Posted to mapreduce-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2011/06/08 16:40:59 UTC
Hadoop-Mapreduce-trunk - Build # 704 - Still Failing
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/704/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235627 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-08 14:35:22,121 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,121 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,122 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,122 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,122 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,123 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,123 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,123 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,124 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,124 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,125 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,125 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,125 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,126 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,126 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,126 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,127 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,127 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,127 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-08 14:35:22,128 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.175 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.316 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 36 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
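The TooManyArgumentsException above is a straightforward min/max arity check: the test handed CommandFormat.parse three positional arguments where at most two were configured. The actual check lives in Java inside CommandFormat; as a hypothetical shell sketch (function name and message wording mirror the log, nothing else is taken from Hadoop's code), the logic is:

```shell
# Hypothetical sketch of a min/max arity check like the one behind
# CommandFormat.parse. Not Hadoop's implementation, which is Java.
check_args() {
    min=$1; max=$2; shift 2
    if [ "$#" -gt "$max" ]; then
        echo "Too many arguments: expected $max but got $#"
        return 1
    elif [ "$#" -lt "$min" ]; then
        echo "Not enough arguments: expected $min but got $#"
        return 1
    fi
    echo "ok"
}

check_args 2 2 src dst extra  # prints: Too many arguments: expected 2 but got 3
check_args 2 2 src dst        # prints: ok
```

The first call reproduces the exact error message from the stack trace above; the failure is in the test's inputs (or a changed expectation in CommandFormat), not in the argument parser crashing.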
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 759 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/759/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3750 lines...]
A ivy/hadoop-mapred-instrumented-template.xml
A ivy/ivysettings.xml
A ivy/libraries.properties
A ivy/hadoop-mapred-template.xml
A bin
A bin/mapred-config.sh
AU bin/stop-mapred.sh
AU bin/mapred
AU bin/start-mapred.sh
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1159391
At revision 1159391
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1159391
no revision recorded for http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-mapreduce in the previous build
no revision recorded for http://svn.apache.org/repos/asf/hadoop/nightly in the previous build
no revision recorded for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin in the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson6699444931498677876.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-18_20-09-06 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Recording fingerprints
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
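The console above shows the recurring failure mode for this and the following builds: the pinned path /homes/hudson/tools/ant/latest/bin/ant does not exist on the slave, so the shell step exits 127 ("command not found") before any tests run. A minimal guard for this pattern, sketched in shell (the function name and fallback behavior are hypothetical, not what the Jenkins job script actually does), would be:

```shell
# Hypothetical guard: prefer a pinned tool path, fall back to a PATH
# lookup, and report failure instead of dying with a bare 127.
find_tool() {
    # $1: pinned location; $2: command name for the PATH fallback.
    pinned=$1; name=$2
    if [ -x "$pinned" ]; then
        echo "$pinned"
    else
        command -v "$name"   # non-zero exit status if absent from PATH too
    fi
}

# The pinned ant path from the console log will not exist on most hosts;
# demonstrate the fallback with /bin/sh, which will.
find_tool /bin/sh sh   # prints: /bin/sh
find_tool /nonexistent/pinned/path sh
find_tool /nonexistent/pinned/path no-such-tool-xyz \
    || echo "Build Failed: remaining tests not run"
```

With a guard like this the build still fails when the tool is genuinely missing, but the console ends with an explicit diagnostic rather than the raw `not found` from the script interpreter.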
Re: Hadoop-Mapreduce-trunk - Build # 758 - Still Failing
Posted by Todd Lipcon <to...@cloudera.com>.
It looks like the thing killing the builds is TestJvmManager.java, not
the eclipse plugin.
Any idea why the Trunk-Commit build is apparently not picking up the
latest common-test artifact? Any luck getting my ssh keys on the
machine so you aren't the only one who can debug this?
-Todd
On Thu, Aug 18, 2011 at 4:58 PM, Giridharan Kesavan
<gk...@hortonworks.com> wrote:
> eclipse plugin requires a fix.
> https://issues.apache.org/jira/browse/MAPREDUCE-2859
>
> On Thu, Aug 18, 2011 at 7:09 AM, Vinod Kumar Vavilapalli
> <vi...@hortonworks.com> wrote:
>> Giri, can you please help with this?
>>
>> Thanks,
>> +Vinod
>>
>>>
>>> On Thu, Aug 18, 2011 at 6:32 PM, Apache Jenkins Server
>>> <je...@builds.apache.org> wrote:
>>>> See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/758/
--
Todd Lipcon
Software Engineer, Cloudera
Re: Hadoop-Mapreduce-trunk - Build # 758 - Still Failing
Posted by Giridharan Kesavan <gk...@hortonworks.com>.
eclipse plugin requires a fix.
https://issues.apache.org/jira/browse/MAPREDUCE-2859
On Thu, Aug 18, 2011 at 7:09 AM, Vinod Kumar Vavilapalli
<vi...@hortonworks.com> wrote:
> Giri, can you please help with this?
>
> Thanks,
> +Vinod
>
>>
>> On Thu, Aug 18, 2011 at 6:32 PM, Apache Jenkins Server
>> <je...@builds.apache.org> wrote:
>>> See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/758/
Re: Hadoop-Mapreduce-trunk - Build # 758 - Still Failing
Posted by Vinod Kumar Vavilapalli <vi...@hortonworks.com>.
Giri, can you please help with this?
Thanks,
+Vinod
>
> On Thu, Aug 18, 2011 at 6:32 PM, Apache Jenkins Server
> <je...@builds.apache.org> wrote:
>> See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/758/
Hadoop-Mapreduce-trunk - Build # 758 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/758/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
Started by timer
Building remotely on hadoop7
Location 'http://svn.apache.org/repos/asf/hadoop/common/trunk/mapreduce' does not exist
One or more repository locations do not exist anymore for Hadoop-Mapreduce-trunk, project will be disabled.
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 757 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/757/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2401 lines...]
A src/packages/templates/conf
A src/packages/templates/conf/mapred-site.xml
A bin
A bin/mapred-config.sh
AU bin/stop-mapred.sh
AU bin/mapred
AU bin/start-mapred.sh
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1158682
At revision 1158680
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1158680
no change for http://svn.apache.org/repos/asf/hadoop/common/trunk/mapreduce since the previous build
no change for http://svn.apache.org/repos/asf/hadoop/nightly since the previous build
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin since the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson7444964873207566057.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-17_13-00-45 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 756 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/756/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2399 lines...]
A src/packages/rpm/spec/hadoop-mapred.spec
A src/packages/templates
A src/packages/templates/conf
A src/packages/templates/conf/mapred-site.xml
A bin
A bin/mapred-config.sh
AU bin/stop-mapred.sh
AU bin/mapred
AU bin/start-mapred.sh
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1158253
At revision 1158253
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1158253
no change for http://svn.apache.org/repos/asf/hadoop/common/trunk/mapreduce since the previous build
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin since the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson4082610413391268225.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-16_13-00-39 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 755 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/755/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2400 lines...]
A src/packages/templates
A src/packages/templates/conf
A src/packages/templates/conf/mapred-site.xml
A bin
A bin/mapred-config.sh
AU bin/stop-mapred.sh
AU bin/mapred
AU bin/start-mapred.sh
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1157831
At revision 1157831
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1157831
no change for http://svn.apache.org/repos/asf/hadoop/common/trunk/mapreduce since the previous build
no change for http://svn.apache.org/repos/asf/hadoop/nightly since the previous build
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin since the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson3080555432284805009.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-15_13-00-39 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 754 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/754/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2406 lines...]
AU bin/mapred
AU bin/start-mapred.sh
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1157528
At revision 1157528
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1157528
no change for http://svn.apache.org/repos/asf/hadoop/nightly since the previous build
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin since the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson4060807752472736493.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-14_13-00-39 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-279
Updating MAPREDUCE-901
Updating MAPREDUCE-2037
Updating MAPREDUCE-2837
Updating MAPREDUCE-2839
Updating MAPREDUCE-2727
Updating MAPREDUCE-2541
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 753 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/753/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2377 lines...]
A src/packages/templates
A src/packages/templates/conf
A src/packages/templates/conf/mapred-site.xml
A bin
A bin/mapred-config.sh
AU bin/stop-mapred.sh
AU bin/mapred
AU bin/start-mapred.sh
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1157082
At revision 1157082
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1157082
no change for http://svn.apache.org/repos/asf/hadoop/nightly since the previous build
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin since the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson3016420158666964780.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-12_13-00-30 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2187
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 752 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/752/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2380 lines...]
A bin
A bin/mapred-config.sh
AU bin/stop-mapred.sh
AU bin/mapred
AU bin/start-mapred.sh
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1156857
At revision 1156857
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1156857
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin since the previous build
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson3776079390733309452.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-11_23-36-03 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2489
Updating HDFS-2239
Updating MAPREDUCE-2797
Updating MAPREDUCE-2805
Updating HDFS-2241
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 751 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/751/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2396 lines...]
A build-utils.xml
A build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/bin' at -1 into '/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/bin'
AU src/test/bin/smart-apply-patch.sh
AU src/test/bin/test-patch.sh
At revision 1155988
At revision 1155988
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AU tar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AU hudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1155988
No emails were triggered.
[Hadoop-Mapreduce-trunk] $ /bin/bash /tmp/hudson1625496986969247456.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-10_01-26-23 -Declipse.home=/homes/hudson/tools/eclipse/latest -Dfindbugs.home=/homes/hudson/tools/findbugs/latest -Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true -Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2187
Updating MAPREDUCE-2740
Updating MAPREDUCE-2741
Updating MAPREDUCE-2705
Updating HADOOP-6671
Updating MAPREDUCE-2732
Updating MAPREDUCE-2243
Updating MAPREDUCE-2723
Updating MAPREDUCE-2127
Updating MAPREDUCE-2760
Updating MAPREDUCE-2463
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 750 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/750/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 237844 lines...]
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-27 14:34:33,392 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,393 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,393 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,394 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,394 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,394 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,395 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,395 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,395 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,396 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,396 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,396 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,397 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,397 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,397 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,398 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,398 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,398 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,399 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-27 14:34:33,399 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.149 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.349 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.325 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 90 minutes 53 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2711
Updating MAPREDUCE-2723
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw8w(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
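The first failure above comes from `CommandFormat.parse` rejecting an argument list longer than its configured maximum. The expected-vs-actual arity check that produces the "Too many arguments: expected 2 but got 3" diagnostic can be sketched as follows (`check_args` is an illustrative shell analogue, not Hadoop's `CommandFormat` source):

```shell
#!/bin/sh
# Illustrative arity check: fail with an expected-vs-actual diagnostic
# when more arguments are supplied than the stated maximum.
check_args() {
  max=$1; shift                      # first parameter: allowed maximum
  if [ "$#" -gt "$max" ]; then
    echo "Too many arguments: expected $max but got $#"
    return 1
  fi
  return 0                           # within bounds: succeed
}
```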
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 749 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/749/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5079 lines...]
[ivy:resolve] ==== maven2: tried
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar:
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] module not found: org.apache.hadoop#hadoop-common-test;0.23.0-SNAPSHOT
[ivy:resolve] ==== apache-snapshot: tried
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common-test/0.23.0-SNAPSHOT/hadoop-common-test-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common-test;0.23.0-SNAPSHOT!hadoop-common-test.jar:
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common-test/0.23.0-SNAPSHOT/hadoop-common-test-0.23.0-SNAPSHOT.jar
[ivy:resolve] ==== maven2: tried
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common-test/0.23.0-SNAPSHOT/hadoop-common-test-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common-test;0.23.0-SNAPSHOT!hadoop-common-test.jar:
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common-test/0.23.0-SNAPSHOT/hadoop-common-test-0.23.0-SNAPSHOT.jar
[ivy:resolve] module not found: org.apache.hadoop#hadoop-hdfs;0.23.0-SNAPSHOT
[ivy:resolve] ==== apache-snapshot: tried
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-hdfs;0.23.0-SNAPSHOT!hadoop-hdfs.jar:
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-SNAPSHOT.jar
[ivy:resolve] ==== maven2: tried
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-hdfs;0.23.0-SNAPSHOT!hadoop-hdfs.jar:
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-SNAPSHOT.jar
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: UNRESOLVED DEPENDENCIES ::
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT: not found
[ivy:resolve] :: org.apache.hadoop#hadoop-common-test;0.23.0-SNAPSHOT: not found
[ivy:resolve] :: org.apache.hadoop#hadoop-hdfs;0.23.0-SNAPSHOT: not found
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :::: ERRORS
[ivy:resolve] SERVER ERROR: Service Temporarily Unavailable url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110726.050305-217.pom
[ivy:resolve] SERVER ERROR: Service Temporarily Unavailable url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common-test/0.23.0-SNAPSHOT/hadoop-common-test-0.23.0-SNAPSHOT.pom
[ivy:resolve] SERVER ERROR: Service Temporarily Unavailable url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common-test/0.23.0-SNAPSHOT/hadoop-common-test-0.23.0-SNAPSHOT.jar
[ivy:resolve] SERVER ERROR: Service Temporarily Unavailable url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-SNAPSHOT.pom
[ivy:resolve] SERVER ERROR: Service Temporarily Unavailable url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-SNAPSHOT.jar
[ivy:resolve]
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:2271: impossible to resolve dependencies:
resolve failed - see output for details
Total time: 14 minutes 58 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2602
Updating MAPREDUCE-2622
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 748 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/748/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236042 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-25 14:34:28,758 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,758 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,759 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,759 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,759 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,760 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,760 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,760 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,761 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,761 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,761 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,762 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,762 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,762 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,763 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,763 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,763 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,764 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,764 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-25 14:34:28,764 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.087 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.37 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.313 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 90 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2575
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7v(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 747 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/747/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236717 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-24 14:32:51,968 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,968 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,968 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,969 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,969 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,970 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,970 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,970 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,971 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,971 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,971 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,972 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,972 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,972 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,973 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,973 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,973 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,974 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,974 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-24 14:32:51,974 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.171 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.34 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.319 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 89 minutes 45 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7v(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 746 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/746/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 237601 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-23 14:33:16,843 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,844 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,844 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,844 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,845 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,845 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,845 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,846 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,846 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,846 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,847 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,847 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,847 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,848 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,848 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,848 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,849 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,849 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,849 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-23 14:33:16,850 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.125 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.334 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.308 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 89 minutes 50 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7v(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
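The TestFileSystem failure above is an argument-count guard firing in the shell command parser: the test passed three positional arguments where the format allowed two. A minimal, self-contained sketch of that kind of check (class and method names here are illustrative stand-ins, not the actual org.apache.hadoop.fs.shell.CommandFormat API):

```java
import java.util.Arrays;
import java.util.List;

public class ArgCountCheck {
    // Hypothetical analogue of CommandFormat$TooManyArgumentsException.
    static class TooManyArgumentsException extends RuntimeException {
        TooManyArgumentsException(int expected, int actual) {
            super("Too many arguments: expected " + expected + " but got " + actual);
        }
    }

    /** Accepts at most maxArgs positional arguments, else throws. */
    static List<String> parse(int maxArgs, String... args) {
        if (args.length > maxArgs) {
            throw new TooManyArgumentsException(maxArgs, args.length);
        }
        return Arrays.asList(args);
    }

    public static void main(String[] unused) {
        System.out.println(parse(2, "src", "dst"));   // within the limit: accepted
        try {
            parse(2, "src", "dst", "extra");          // one argument too many
        } catch (TooManyArgumentsException e) {
            System.out.println(e.getMessage());
        }
    }
}
```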
Hadoop-Mapreduce-trunk - Build # 745 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/745/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 234832 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-22 14:33:22,713 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,713 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,714 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,714 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,714 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,715 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,715 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,715 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,716 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,716 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,716 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,717 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,717 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,717 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,718 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,718 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,718 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,719 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,719 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-22 14:33:22,719 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.195 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.344 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.296 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 90 minutes 0 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2156
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7v(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 744 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/744/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235128 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-21 14:36:11,215 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,221 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,221 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,221 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,222 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,222 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,223 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-21 14:36:11,223 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.208 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.368 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.329 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 90 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2409
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7v(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 743 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/743/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 234049 lines...]
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-20 14:36:54,720 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,721 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,721 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,722 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,722 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,723 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,723 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,723 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,724 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,724 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,724 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,725 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,725 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,725 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,726 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,726 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,726 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,727 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,727 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-20 14:36:54,728 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.184 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.371 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.299 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 91 minutes 27 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2711
Updating HDFS-2161
Updating MAPREDUCE-2710
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7b(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
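The percentile lines in the rumen test output above (e.g. "0.9:96658", or the degenerate "0.1:18592 / 0.5:18592 / 0.9:18592" when only one runtime value exists) are quantiles read off an empirical runtime distribution. A hedged sketch of how such quantiles can be computed from sorted samples; this is illustrative only, not the actual Rumen implementation:

```java
import java.util.Arrays;

public class EmpiricalQuantile {
    /**
     * Returns the smallest sample value v such that at least fraction p
     * (0 < p <= 1) of the sorted samples are <= v, i.e. a step-function
     * inverse of the empirical CDF.
     */
    static long quantile(long[] sorted, double p) {
        int idx = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }

    public static void main(String[] args) {
        // Illustrative runtimes in milliseconds; not data from the build log.
        long[] runtimes = {18592, 20000, 25000, 31000, 96658};
        Arrays.sort(runtimes);
        System.out.println("0.5:" + quantile(runtimes, 0.5));
        System.out.println("0.9:" + quantile(runtimes, 0.9));
    }
}
```

With a single sample, every quantile collapses to that one value, which is why the failed-map distribution in the log reports 18592 at every percentile.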
Hadoop-Mapreduce-trunk - Build # 742 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/742/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6871 lines...]
compile-test:
[echo] contrib: raid
[javac] Compiling 20 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:84: blockManager is not public in org.apache.hadoop.hdfs.server.namenode.FSNamesystem; cannot be accessed from outside package
[javac] namesystem.blockManager.replicator instanceof BlockPlacementPolicyRaid);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:85: blockManager is not public in org.apache.hadoop.hdfs.server.namenode.FSNamesystem; cannot be accessed from outside package
[javac] policy = (BlockPlacementPolicyRaid) namesystem.blockManager.replicator;
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:272: cannot find symbol
[javac] symbol : variable clusterMap
[javac] location: class org.apache.hadoop.hdfs.server.namenode.FSNamesystem
[javac] conf, namesystem, namesystem.clusterMap));
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:317: blockManager is not public in org.apache.hadoop.hdfs.server.namenode.FSNamesystem; cannot be accessed from outside package
[javac] namesystem.blockManager.replicator = policy;
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:317: cannot assign a value to final variable replicator
[javac] namesystem.blockManager.replicator = policy;
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:337: cannot find symbol
[javac] symbol : variable clusterMap
[javac] location: class org.apache.hadoop.hdfs.server.namenode.FSNamesystem
[javac] conf, namesystem, namesystem.clusterMap));
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:454: cannot find symbol
[javac] symbol : variable clusterMap
[javac] location: class org.apache.hadoop.hdfs.server.namenode.FSNamesystem
[javac] policy.initialize(conf, namesystem, namesystem.clusterMap);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockPlacementPolicyRaid.java:501: blockManager is not public in org.apache.hadoop.hdfs.server.namenode.FSNamesystem; cannot be accessed from outside package
[javac] INodeFile inode = namesystem.blockManager.blocksMap.getINode(block
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 8 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1189: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:227: Compile failed; see the compiler error output for details.
Total time: 3 minutes 50 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2623
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
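The javac errors in build 742 above ("blockManager is not public in org.apache.hadoop.hdfs.server.namenode.FSNamesystem; cannot be accessed from outside package") are ordinary Java access control: a member declared with no access modifier is package-private, so a test class compiled in a different package (here, ...hdfs.server.blockmanagement) cannot reference it directly once the field stops being public. A minimal sketch of the rule, using hypothetical stand-in classes rather than the real FSNamesystem/BlockManager:

```java
public class VisibilityDemo {
    // Stand-in for a namesystem whose internals were made package-private.
    static class Namesystem {
        String blockManager = "bm";        // no modifier: package-private,
                                           // invisible to other packages
        public String getBlockManager() {  // public accessor: the reference
            return blockManager;           // that still compiles everywhere
        }
    }

    public static void main(String[] args) {
        Namesystem ns = new Namesystem();
        // Direct field access compiles here only because this code shares
        // the field's package; from another package, only the getter would.
        System.out.println(ns.blockManager);
        System.out.println(ns.getBlockManager());
    }
}
```

The fix in such cases is either to route the test through a public accessor or to move it into the declaring package; simply recompiling from outside, as the raid contrib test did, fails as shown.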
Hadoop-Mapreduce-trunk - Build # 741 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/741/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235608 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-18 14:34:31,461 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,461 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,462 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,462 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,462 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,463 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,463 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,463 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,464 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,464 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,464 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,465 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,465 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,465 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,466 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,466 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,466 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,467 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,467 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-18 14:34:31,467 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.117 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.37 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 92 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7b(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 740 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/740/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 234634 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-17 14:34:29,084 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,085 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,085 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,085 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,086 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,086 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,087 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,087 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,087 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,088 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,088 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,089 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,089 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,089 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,090 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,090 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,090 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,091 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,091 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-17 14:34:29,092 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.162 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.319 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 91 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7b(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 739 - Still Failing
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/739/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 234402 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-16 14:35:12,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,221 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,221 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,221 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,222 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,222 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,222 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-16 14:35:12,223 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.232 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.349 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.314 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 92 minutes 35 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7b(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 737 - Still Failing
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/737/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235897 lines...]
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-14 14:50:00,005 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,006 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,006 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,007 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,007 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,007 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,008 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,008 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,008 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,009 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,009 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,009 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,010 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,010 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,010 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,011 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,011 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,011 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,012 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-14 14:50:00,012 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.133 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.369 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.292 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 107 minutes 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2670
Updating MAPREDUCE-2365
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testBlacklistedNodeDecommissioning
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw7b(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 736 - Still Failing
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/736/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 233489 lines...]
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-13 14:33:22,437 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,438 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,438 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,438 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,439 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,439 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,439 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,440 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,440 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,440 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,441 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,441 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,442 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,442 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,442 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,443 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,443 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,443 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,444 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-13 14:33:22,444 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.129 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.331 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:848: Tests failed!
Total time: 90 minutes 41 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2400
Updating MAPREDUCE-2680
Updating MAPREDUCE-2682
Updating MAPREDUCE-2679
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw4n(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 735 - Still Failing
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/735/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235426 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-12 14:33:46,318 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,318 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,319 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,319 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,319 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,320 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,320 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,320 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,321 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,321 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,321 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,321 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,322 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,322 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,322 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-12 14:33:46,324 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.148 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.342 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.31 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 0 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2606
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw4n(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 733 - Still Failing
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/733/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236956 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-10 14:34:59,534 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has null TaskStatus
[junit] 2011-07-10 14:34:59,535 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has null TaskStatus
[junit] 2011-07-10 14:34:59,535 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has null TaskStatus
[junit] 2011-07-10 14:34:59,535 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has null TaskStatus
[junit] 2011-07-10 14:34:59,536 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has null TaskStatus
[junit] 2011-07-10 14:34:59,536 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has null TaskStatus
[junit] 2011-07-10 14:34:59,536 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has null TaskStatus
[junit] 2011-07-10 14:34:59,537 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has null TaskStatus
[junit] 2011-07-10 14:34:59,537 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has null TaskStatus
[junit] 2011-07-10 14:34:59,537 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has null TaskStatus
[junit] 2011-07-10 14:34:59,538 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has null TaskStatus
[junit] 2011-07-10 14:34:59,538 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has null TaskStatus
[junit] 2011-07-10 14:34:59,539 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has null TaskStatus
[junit] 2011-07-10 14:34:59,539 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has null TaskStatus
[junit] 2011-07-10 14:34:59,539 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has null TaskStatus
[junit] 2011-07-10 14:34:59,540 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has null TaskStatus
[junit] 2011-07-10 14:34:59,540 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has null TaskStatus
[junit] 2011-07-10 14:34:59,540 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has null TaskStatus
[junit] 2011-07-10 14:34:59,541 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has null TaskStatus
[junit] 2011-07-10 14:34:59,541 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has null TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.174 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.387 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.326 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 92 minutes 5 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 732 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/732/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235474 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-09 14:34:52,304 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has null TaskStatus
[junit] 2011-07-09 14:34:52,304 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has null TaskStatus
[junit] 2011-07-09 14:34:52,305 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has null TaskStatus
[junit] 2011-07-09 14:34:52,305 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has null TaskStatus
[junit] 2011-07-09 14:34:52,305 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has null TaskStatus
[junit] 2011-07-09 14:34:52,306 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has null TaskStatus
[junit] 2011-07-09 14:34:52,306 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has null TaskStatus
[junit] 2011-07-09 14:34:52,306 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has null TaskStatus
[junit] 2011-07-09 14:34:52,307 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has null TaskStatus
[junit] 2011-07-09 14:34:52,307 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has null TaskStatus
[junit] 2011-07-09 14:34:52,307 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has null TaskStatus
[junit] 2011-07-09 14:34:52,308 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has null TaskStatus
[junit] 2011-07-09 14:34:52,308 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has null TaskStatus
[junit] 2011-07-09 14:34:52,308 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has null TaskStatus
[junit] 2011-07-09 14:34:52,308 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has null TaskStatus
[junit] 2011-07-09 14:34:52,309 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has null TaskStatus
[junit] 2011-07-09 14:34:52,309 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has null TaskStatus
[junit] 2011-07-09 14:34:52,309 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has null TaskStatus
[junit] 2011-07-09 14:34:52,310 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has null TaskStatus
[junit] 2011-07-09 14:34:52,310 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has null TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.09 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.34 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2596
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 731 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/731/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 237213 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-08 14:34:50,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has null TaskStatus
[junit] 2011-07-08 14:34:50,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has null TaskStatus
[junit] 2011-07-08 14:34:50,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has null TaskStatus
[junit] 2011-07-08 14:34:50,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has null TaskStatus
[junit] 2011-07-08 14:34:50,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has null TaskStatus
[junit] 2011-07-08 14:34:50,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has null TaskStatus
[junit] 2011-07-08 14:34:50,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has null TaskStatus
[junit] 2011-07-08 14:34:50,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has null TaskStatus
[junit] 2011-07-08 14:34:50,053 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has null TaskStatus
[junit] 2011-07-08 14:34:50,053 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has null TaskStatus
[junit] 2011-07-08 14:34:50,053 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has null TaskStatus
[junit] 2011-07-08 14:34:50,054 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has null TaskStatus
[junit] 2011-07-08 14:34:50,054 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has null TaskStatus
[junit] 2011-07-08 14:34:50,054 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has null TaskStatus
[junit] 2011-07-08 14:34:50,055 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has null TaskStatus
[junit] 2011-07-08 14:34:50,055 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has null TaskStatus
[junit] 2011-07-08 14:34:50,055 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has null TaskStatus
[junit] 2011-07-08 14:34:50,056 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has null TaskStatus
[junit] 2011-07-08 14:34:50,056 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has null TaskStatus
[junit] 2011-07-08 14:34:50,056 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has null TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.147 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.365 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.329 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 39 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2249
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION: org.apache.hadoop.mapred.TestDebugScript.testDebugScript
Error Message:
Output file does not exist. DebugScript has not been run
Stack Trace:
junit.framework.AssertionFailedError: Output file does not exist. DebugScript has not been run
at org.apache.hadoop.mapred.TestDebugScript.verifyDebugScriptOutput(TestDebugScript.java:152)
at org.apache.hadoop.mapred.TestDebugScript.verifyDebugScriptOutput(TestDebugScript.java:136)
at org.apache.hadoop.mapred.TestDebugScript.__CLR3_0_2q37pw3rs8(TestDebugScript.java:124)
at org.apache.hadoop.mapred.TestDebugScript.testDebugScript(TestDebugScript.java:110)
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 730 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/730/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235944 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-07 14:34:52,450 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has null TaskStatus
[junit] 2011-07-07 14:34:52,450 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has null TaskStatus
[junit] 2011-07-07 14:34:52,450 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has null TaskStatus
[junit] 2011-07-07 14:34:52,451 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has null TaskStatus
[junit] 2011-07-07 14:34:52,451 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has null TaskStatus
[junit] 2011-07-07 14:34:52,451 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has null TaskStatus
[junit] 2011-07-07 14:34:52,452 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has null TaskStatus
[junit] 2011-07-07 14:34:52,452 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has null TaskStatus
[junit] 2011-07-07 14:34:52,452 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has null TaskStatus
[junit] 2011-07-07 14:34:52,453 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has null TaskStatus
[junit] 2011-07-07 14:34:52,453 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has null TaskStatus
[junit] 2011-07-07 14:34:52,454 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has null TaskStatus
[junit] 2011-07-07 14:34:52,454 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has null TaskStatus
[junit] 2011-07-07 14:34:52,454 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has null TaskStatus
[junit] 2011-07-07 14:34:52,455 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has null TaskStatus
[junit] 2011-07-07 14:34:52,455 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has null TaskStatus
[junit] 2011-07-07 14:34:52,455 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has null TaskStatus
[junit] 2011-07-07 14:34:52,455 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has null TaskStatus
[junit] 2011-07-07 14:34:52,456 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has null TaskStatus
[junit] 2011-07-07 14:34:52,456 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has null TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.144 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.338 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.304 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 51 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 729 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/729/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235170 lines...]
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-06 14:35:06,888 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has null TaskStatus
[junit] 2011-07-06 14:35:06,889 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has null TaskStatus
[junit] 2011-07-06 14:35:06,889 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has null TaskStatus
[junit] 2011-07-06 14:35:06,889 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has null TaskStatus
[junit] 2011-07-06 14:35:06,890 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has null TaskStatus
[junit] 2011-07-06 14:35:06,890 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has null TaskStatus
[junit] 2011-07-06 14:35:06,890 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has null TaskStatus
[junit] 2011-07-06 14:35:06,891 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has null TaskStatus
[junit] 2011-07-06 14:35:06,891 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has null TaskStatus
[junit] 2011-07-06 14:35:06,891 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has null TaskStatus
[junit] 2011-07-06 14:35:06,892 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has null TaskStatus
[junit] 2011-07-06 14:35:06,892 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has null TaskStatus
[junit] 2011-07-06 14:35:06,892 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has null TaskStatus
[junit] 2011-07-06 14:35:06,893 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has null TaskStatus
[junit] 2011-07-06 14:35:06,893 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has null TaskStatus
[junit] 2011-07-06 14:35:06,893 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has null TaskStatus
[junit] 2011-07-06 14:35:06,894 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has null TaskStatus
[junit] 2011-07-06 14:35:06,894 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has null TaskStatus
[junit] 2011-07-06 14:35:06,894 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has null TaskStatus
[junit] 2011-07-06 14:35:06,895 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has null TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.153 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.365 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.311 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 26 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2323
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 728 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/728/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236229 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-05 14:34:41,850 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has null TaskStatus
[junit] 2011-07-05 14:34:41,851 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has null TaskStatus
[junit] 2011-07-05 14:34:41,851 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has null TaskStatus
[junit] 2011-07-05 14:34:41,851 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has null TaskStatus
[junit] 2011-07-05 14:34:41,852 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has null TaskStatus
[junit] 2011-07-05 14:34:41,852 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has null TaskStatus
[junit] 2011-07-05 14:34:41,852 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has null TaskStatus
[junit] 2011-07-05 14:34:41,853 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has null TaskStatus
[junit] 2011-07-05 14:34:41,853 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has null TaskStatus
[junit] 2011-07-05 14:34:41,853 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has null TaskStatus
[junit] 2011-07-05 14:34:41,854 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has null TaskStatus
[junit] 2011-07-05 14:34:41,854 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has null TaskStatus
[junit] 2011-07-05 14:34:41,854 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has null TaskStatus
[junit] 2011-07-05 14:34:41,855 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has null TaskStatus
[junit] 2011-07-05 14:34:41,855 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has null TaskStatus
[junit] 2011-07-05 14:34:41,855 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has null TaskStatus
[junit] 2011-07-05 14:34:41,856 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has null TaskStatus
[junit] 2011-07-05 14:34:41,856 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has null TaskStatus
[junit] 2011-07-05 14:34:41,856 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-05 14:34:41,857 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.137 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.346 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.307 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 35 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 727 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/727/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235760 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-04 14:34:36,593 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,593 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,593 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,594 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,594 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,594 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,595 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,595 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,595 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,596 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,596 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,597 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,597 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,597 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,598 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,598 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,598 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,599 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,599 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-04 14:34:36,599 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.101 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.357 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.301 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 31 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testCopyDfsToDfsUpdateOverwrite
Error Message:
Cannnot read file. expected:<0> but was:<-1>
Stack Trace:
junit.framework.AssertionFailedError: Cannnot read file. expected:<0> but was:<-1>
at org.apache.hadoop.tools.TestCopyFiles.checkFiles(TestCopyFiles.java:175)
at org.apache.hadoop.tools.TestCopyFiles.checkFiles(TestCopyFiles.java:159)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2ddsv4zwwg(TestCopyFiles.java:428)
at org.apache.hadoop.tools.TestCopyFiles.testCopyDfsToDfsUpdateOverwrite(TestCopyFiles.java:395)
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 726 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/726/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 171606 lines...]
FATAL: Unable to delete script file /tmp/hudson4994119684688688348.sh
hudson.util.IOException2: remote file operation failed: /tmp/hudson4994119684688688348.sh at hudson.remoting.Channel@4bd9c06c:hadoop7
at hudson.FilePath.act(FilePath.java:754)
at hudson.FilePath.act(FilePath.java:740)
at hudson.FilePath.delete(FilePath.java:995)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:662)
at hudson.model.Build$RunnerImpl.build(Build.java:177)
at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:429)
at hudson.model.Run.run(Run.java:1374)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:145)
Caused by: hudson.remoting.ChannelClosedException: channel is already closed
at hudson.remoting.Channel.send(Channel.java:480)
at hudson.remoting.Request.call(Request.java:105)
at hudson.remoting.Channel.call(Channel.java:661)
at hudson.FilePath.act(FilePath.java:747)
... 13 more
Caused by: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.Channel$ReaderThread.run(Channel.java:1019)
Caused by: java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2553)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at hudson.remoting.Channel$ReaderThread.run(Channel.java:1013)
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.Request.call(Request.java:137)
at hudson.remoting.Channel.call(Channel.java:661)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:158)
at $Proxy21.join(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:850)
at hudson.Launcher$ProcStarter.join(Launcher.java:336)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:662)
at hudson.model.Build$RunnerImpl.build(Build.java:177)
at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:429)
at hudson.model.Run.run(Run.java:1374)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:145)
Caused by: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.Request.abort(Request.java:257)
at hudson.remoting.Channel.terminate(Channel.java:712)
at hudson.remoting.Channel$ReaderThread.run(Channel.java:1042)
Caused by: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.Channel$ReaderThread.run(Channel.java:1019)
Caused by: java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2553)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at hudson.remoting.Channel$ReaderThread.run(Channel.java:1013)
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 725 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/725/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235220 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-02 14:34:48,403 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,404 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,404 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,405 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,405 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,405 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,406 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,406 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,406 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,407 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,407 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,407 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,408 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,408 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,408 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,409 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,409 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,409 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,410 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-02 14:34:48,410 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.161 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.37 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.307 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 92 minutes 6 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 724 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/724/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235421 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-07-01 14:35:37,583 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,584 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,584 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,585 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,585 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,585 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,586 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,586 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,586 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,587 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,587 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,587 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,588 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,588 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,588 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,589 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,589 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,589 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,590 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-07-01 14:35:37,590 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.121 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.362 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.312 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 58 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 723 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/723/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235998 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-30 14:33:13,388 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,389 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,389 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,390 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,390 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,390 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,391 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,391 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,391 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,392 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,392 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,392 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,393 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,393 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,393 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,394 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,394 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,394 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,395 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-30 14:33:13,395 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.092 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.294 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 90 minutes 9 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 722 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/722/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235347 lines...]
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.1 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.387 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.315 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 90 minutes 56 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2104
Updating MAPREDUCE-2573
Updating MAPREDUCE-2487
Updating MAPREDUCE-2571
Updating MAPREDUCE-2576
Updating HDFS-2087
Updating MAPREDUCE-2554
Updating MAPREDUCE-2531
Updating MAPREDUCE-2550
Updating MAPREDUCE-2430
Updating HADOOP-7106
Updating MAPREDUCE-2559
Updating MAPREDUCE-2539
Updating MAPREDUCE-2452
Updating MAPREDUCE-2107
Updating MAPREDUCE-2455
Updating MAPREDUCE-2603
Updating MAPREDUCE-2469
Updating MAPREDUCE-2494
Updating HADOOP-7384
Updating MAPREDUCE-2624
Updating MAPREDUCE-2543
Updating MAPREDUCE-2544
Updating MAPREDUCE-2581
Updating MAPREDUCE-2529
Updating HDFS-2107
Updating MAPREDUCE-2620
Updating MAPREDUCE-2185
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 721 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/721/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236027 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-28 14:35:11,956 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,957 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,957 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,958 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,958 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,958 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,959 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,959 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,959 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,960 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,960 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,960 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,961 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,961 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,961 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,962 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,962 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,962 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,963 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-28 14:35:11,963 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.101 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.36 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.321 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 92 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 720 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/720/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235332 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-27 14:54:55,726 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,726 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,726 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,727 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,727 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,727 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,728 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,728 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,729 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,729 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,729 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,730 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,730 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,730 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,731 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,731 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,731 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,732 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,732 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-27 14:54:55,732 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.156 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.354 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.313 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 94 minutes 0 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 719 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/719/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236434 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-26 14:36:17,345 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,346 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,346 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,347 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,347 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,348 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,348 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,348 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,349 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,349 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,349 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,350 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,350 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,351 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,351 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,351 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,352 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,352 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,352 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-26 14:36:17,353 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.16 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.357 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.316 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 92 minutes 6 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 718 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/718/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235896 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-25 14:37:10,254 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,254 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,255 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,255 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,255 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,256 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,256 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,256 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,257 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,257 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,257 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,258 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,258 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,258 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,259 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,259 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,259 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,260 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,260 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-25 14:37:10,260 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.118 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.359 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.314 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 92 minutes 20 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 717 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/717/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4214 lines...]
[ivy:resolve] :: evicted modules:
[ivy:resolve] commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve] commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve] commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] commons-lang#commons-lang;2.4 by [commons-lang#commons-lang;2.5] in [common]
[ivy:resolve] commons-logging#commons-logging;1.1 by [commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve] commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] org.codehaus.jackson#jackson-mapper-asl;${jackson.version} by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [common]
[ivy:resolve] org.codehaus.jackson#jackson-core-asl;${jackson.version} by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [common]
[ivy:resolve] com.thoughtworks.paranamer#paranamer;${paranamer.version} by [com.thoughtworks.paranamer#paranamer;2.2] in [common]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| common | 53 | 2 | 0 | 10 || 43 | 0 |
---------------------------------------------------------------------
ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#raid [sync]
[ivy:retrieve] confs: [common]
[ivy:retrieve] 43 artifacts copied, 0 already retrieved (18595kB/72ms)
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivysettings.xml
compile:
[echo] contrib: raid
[javac] Compiling 32 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/classes
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:783: cannot find symbol
[javac] symbol : method opWriteBlock(java.io.DataOutputStream,org.apache.hadoop.hdfs.protocol.ExtendedBlock,int,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,int,java.lang.String,<nulltype>,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier>)
[javac] location: class org.apache.hadoop.hdfs.protocol.datatransfer.Sender
[javac] Sender.opWriteBlock(out, block.getBlock(), 1,
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:450: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:194: Compile failed; see the compiler error output for details.
Total time: 3 minutes 11 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 716 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/716/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236559 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-23 14:34:50,045 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,045 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,046 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,046 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,047 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,047 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,048 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,048 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,049 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,049 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,053 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-23 14:34:50,053 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.352 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.379 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.305 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 21 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 715 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/715/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236828 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-22 14:35:50,225 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,225 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,225 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,226 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,226 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,226 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,227 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,227 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,227 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,227 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,228 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,228 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,228 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,228 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,229 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,229 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,229 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,229 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,230 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-22 14:35:50,230 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.364 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.359 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.348 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 92 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 714 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/714/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235364 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-21 14:50:23,878 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,879 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,879 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,879 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,880 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,880 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,880 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,881 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,881 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,881 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,882 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,882 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,882 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,883 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,883 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,883 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,884 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,884 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,884 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-21 14:50:23,885 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.324 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.369 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.277 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 106 minutes 7 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testBlacklistedNodeDecommissioning
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 713 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/713/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236302 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-20 14:34:56,668 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,669 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,669 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,670 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,670 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,670 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,671 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,671 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,671 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,672 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,672 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,672 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,673 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,673 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,673 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,674 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,674 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,674 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,675 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-20 14:34:56,675 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.103 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 90 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 712 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/712/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 234830 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-19 14:35:29,322 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,322 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,323 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,324 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,324 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,325 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,325 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,325 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,326 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,326 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,326 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,327 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,327 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,327 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,328 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,328 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,329 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-19 14:35:29,329 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.188 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.36 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.282 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 28 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
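[Editor's note: for anyone triaging the recurring TestFileSystem.testCommandFormat failure above, the sketch below illustrates the argument-count check pattern behind the "Too many arguments: expected 2 but got 3" message. This is an illustrative re-implementation, not the actual org.apache.hadoop.fs.shell.CommandFormat source; the class name ArgCountCheck, its constructor, and its exception type are assumptions chosen only to mirror the exception text in the stack trace.]

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of a shell-command argument validator that rejects
// extra positional arguments, mirroring the TooManyArgumentsException
// seen in the TestFileSystem.testCommandFormat stack trace.
// NOT the real CommandFormat class: names and bounds are assumptions.
public class ArgCountCheck {
    static class TooManyArgumentsException extends RuntimeException {
        TooManyArgumentsException(int expected, int actual) {
            super("Too many arguments: expected " + expected
                    + " but got " + actual);
        }
    }

    private final int minArgs;  // lower bound (not enforced in this sketch)
    private final int maxArgs;  // upper bound on positional arguments

    ArgCountCheck(int minArgs, int maxArgs) {
        this.minArgs = minArgs;
        this.maxArgs = maxArgs;
    }

    List<String> parse(String... args) {
        if (args.length > maxArgs) {
            throw new TooManyArgumentsException(maxArgs, args.length);
        }
        // A real parser would also enforce minArgs and strip option flags.
        return Arrays.asList(args);
    }

    public static void main(String[] unused) {
        ArgCountCheck fmt = new ArgCountCheck(1, 2);
        System.out.println(fmt.parse("src", "dst"));  // within bounds, accepted
        try {
            fmt.parse("src", "dst", "extra");         // one argument too many
        } catch (TooManyArgumentsException e) {
            // Prints: Too many arguments: expected 2 but got 3
            System.out.println(e.getMessage());
        }
    }
}
```

The test failure above suggests the command under test was handed three positional arguments where its format declared a maximum of two.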
Hadoop-Mapreduce-trunk - Build # 711 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/711/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 237387 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-17 14:36:05,118 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,119 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,119 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,119 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,120 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,120 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,120 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,121 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,121 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,121 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,122 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,122 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,122 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,123 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,123 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,123 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,124 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,124 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,124 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-17 14:36:05,125 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.132 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.368 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.338 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 91 minutes 44 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 710 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/710/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235236 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-14 14:35:58,994 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,994 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,995 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,995 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,995 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,996 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,996 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,996 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,997 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,997 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,997 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,997 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,998 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,998 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,998 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,999 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,999 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,999 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-14 14:35:58,999 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-14 14:35:59,000 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.12 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.391 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.326 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 90 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 709 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/709/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8 lines...]
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:264)
at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:516)
at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:98)
at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1001)
at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:178)
at org.tmatesoft.svn.core.wc.SVNBasicClient.getRevisionNumber(SVNBasicClient.java:482)
at org.tmatesoft.svn.core.wc.SVNBasicClient.getLocations(SVNBasicClient.java:873)
at org.tmatesoft.svn.core.wc.SVNBasicClient.createRepository(SVNBasicClient.java:534)
at org.tmatesoft.svn.core.wc.SVNUpdateClient.doCheckout(SVNUpdateClient.java:901)
at hudson.scm.subversion.CheckoutUpdater$1.perform(CheckoutUpdater.java:83)
at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:135)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:726)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:707)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:691)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1979)
at hudson.remoting.UserRequest.perform(UserRequest.java:114)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:270)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.tmatesoft.svn.core.SVNErrorMessage: svn: OPTIONS /repos/asf/hadoop/common/trunk/mapreduce failed
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:200)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:146)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:89)
... 26 more
Caused by: org.tmatesoft.svn.core.SVNAuthenticationException: svn: OPTIONS request failed on '/repos/asf/hadoop/common/trunk/mapreduce'
svn: OPTIONS of /repos/asf/hadoop/common/trunk/mapreduce: 403 Forbidden (http://svn.apache.org)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:62)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:638)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:285)
... 25 more
Caused by: org.tmatesoft.svn.core.SVNErrorMessage: svn: OPTIONS request failed on '/repos/asf/hadoop/common/trunk/mapreduce'
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:200)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:146)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:89)
at org.tmatesoft.svn.core.SVNErrorMessage.wrap(SVNErrorMessage.java:366)
... 27 more
Caused by: org.tmatesoft.svn.core.SVNErrorMessage: svn: OPTIONS of /repos/asf/hadoop/common/trunk/mapreduce: 403 Forbidden (http://svn.apache.org)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:200)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:181)
at org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:133)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPRequest.createDefaultErrorMessage(HTTPRequest.java:430)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPRequest.dispatch(HTTPRequest.java:187)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:364)
... 26 more
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 708 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/708/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4208 lines...]
compile:
[echo] contrib: raid
[javac] Compiling 32 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/classes
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java:35: package org.apache.hadoop.hdfs.protocol.DataTransferProtocol does not exist
[javac] import org.apache.hadoop.hdfs.protocol.DataTransferProtocol.PacketHeader;
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:46: cannot find symbol
[javac] symbol : class DataTransferProtocol
[javac] location: package org.apache.hadoop.hdfs.protocol
[javac] import org.apache.hadoop.hdfs.protocol.DataTransferProtocol;
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java:250: cannot find symbol
[javac] symbol : class PacketHeader
[javac] location: class org.apache.hadoop.hdfs.server.datanode.RaidBlockSender
[javac] PacketHeader header = new PacketHeader(
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java:250: cannot find symbol
[javac] symbol : class PacketHeader
[javac] location: class org.apache.hadoop.hdfs.server.datanode.RaidBlockSender
[javac] PacketHeader header = new PacketHeader(
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java:379: cannot find symbol
[javac] symbol : variable PacketHeader
[javac] location: class org.apache.hadoop.hdfs.server.datanode.RaidBlockSender
[javac] int pktSize = PacketHeader.PKT_HEADER_LEN;
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:784: package DataTransferProtocol does not exist
[javac] DataTransferProtocol.
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:783: package DataTransferProtocol does not exist
[javac] DataTransferProtocol.Sender.opWriteBlock(out, block.getBlock(), 1,
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 7 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:450: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:194: Compile failed; see the compiler error output for details.
Total time: 3 minutes 19 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Mapreduce-trunk - Build # 707 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/707/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 126953 lines...]
[junit] 0.8:96530
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-11 14:04:13,046 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,046 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,047 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,047 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,048 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,048 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,048 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,049 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,049 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,049 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,050 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,051 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,052 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-11 14:04:13,053 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.091 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.398 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.331 sec
checkfailure:
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 58 minutes 51 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
106 tests failed.
REGRESSION: org.apache.hadoop.conf.TestNoDefaultsJobConf.testNoDefaults
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:155)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.examples.terasort.TestTeraSort.testTeraSort
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:155)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.fs.TestDFSIO.testIOs
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:997)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:268)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:315)
at org.apache.hadoop.fs.TestDFSIO.createControlFile(TestDFSIO.java:222)
at org.apache.hadoop.fs.TestDFSIO.testIOs(TestDFSIO.java:186)
at org.apache.hadoop.fs.TestDFSIO.__CLR3_0_27nynfmgt1(TestDFSIO.java:168)
at org.apache.hadoop.fs.TestDFSIO.testIOs(TestDFSIO.java:166)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)

REGRESSION: org.apache.hadoop.ipc.TestSocketFactory.testSocketFactory
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.ipc.TestSocketFactory.__CLR3_0_2slf4ukjzk(TestSocketFactory.java:89)
at org.apache.hadoop.ipc.TestSocketFactory.testSocketFactory(TestSocketFactory.java:47)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestBadRecords.testBadMapRed
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMapReduce
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMapReduceRestarting
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)

REGRESSION: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testDFSRestart
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)

REGRESSION: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMRConfig
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)

REGRESSION: org.apache.hadoop.mapred.TestCommandLineJobSubmission.testJobShell
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.mapred.TestCommandLineJobSubmission.__CLR3_0_25qkf647bl(TestCommandLineJobSubmission.java:55)
at org.apache.hadoop.mapred.TestCommandLineJobSubmission.testJobShell(TestCommandLineJobSubmission.java:45)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)

REGRESSION: org.apache.hadoop.mapred.TestCompressedEmptyMapOutputs.testMapReduceSortWithCompressedEmptyMapOutputs
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestCompressedEmptyMapOutputs.__CLR3_0_2hykch7ujk(TestCompressedEmptyMapOutputs.java:109)
at org.apache.hadoop.mapred.TestCompressedEmptyMapOutputs.testMapReduceSortWithCompressedEmptyMapOutputs(TestCompressedEmptyMapOutputs.java:99)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestFileInputFormat.testLocality
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.mapred.TestFileInputFormat.createInputs(TestFileInputFormat.java:95)
at org.apache.hadoop.mapred.TestFileInputFormat.__CLR3_0_2b6vakkhfu(TestFileInputFormat.java:56)
at org.apache.hadoop.mapred.TestFileInputFormat.testLocality(TestFileInputFormat.java:48)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)

REGRESSION: org.apache.hadoop.mapred.TestFileInputFormat.testNumInputs
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.mapred.TestFileInputFormat.createInputs(TestFileInputFormat.java:95)
at org.apache.hadoop.mapred.TestFileInputFormat.__CLR3_0_2xfe8n8hgx(TestFileInputFormat.java:114)
at org.apache.hadoop.mapred.TestFileInputFormat.testNumInputs(TestFileInputFormat.java:104)

REGRESSION: org.apache.hadoop.mapred.TestFileInputFormat.testMultiLevelInput
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.mapred.TestFileInputFormat.writeFile(TestFileInputFormat.java:193)
at org.apache.hadoop.mapred.TestFileInputFormat.__CLR3_0_29v537whhi(TestFileInputFormat.java:166)
at org.apache.hadoop.mapred.TestFileInputFormat.testMultiLevelInput(TestFileInputFormat.java:152)

REGRESSION: org.apache.hadoop.mapred.TestJobClient.testJobClient
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobClient.testMissingProfileOutput
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)

REGRESSION: org.apache.hadoop.mapred.TestJobDirCleanup.testJobDirCleanup
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestJobDirCleanup.__CLR3_0_2y1yph3u6s(TestJobDirCleanup.java:61)
at org.apache.hadoop.mapred.TestJobDirCleanup.testJobDirCleanup(TestJobDirCleanup.java:48)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobHistory.testDoneFolderOnHDFS
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestJobHistory.runDoneFolderTest(TestJobHistory.java:651)
at org.apache.hadoop.mapred.TestJobHistory.__CLR3_0_215eykxlo7(TestJobHistory.java:611)
at org.apache.hadoop.mapred.TestJobHistory.testDoneFolderOnHDFS(TestJobHistory.java:610)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobHistory.testDoneFolderNotOnDefaultFileSystem
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestJobHistory.runDoneFolderTest(TestJobHistory.java:651)
at org.apache.hadoop.mapred.TestJobHistory.__CLR3_0_2hg75lvlo9(TestJobHistory.java:615)
at org.apache.hadoop.mapred.TestJobHistory.testDoneFolderNotOnDefaultFileSystem(TestJobHistory.java:614)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobName.testComplexName
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobName.testComplexNameWithRegex
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)

REGRESSION: org.apache.hadoop.mapred.TestJobQueueInformation.testJobQueues
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.TestJobQueueInformation.setUp(TestJobQueueInformation.java:90)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobStatusPersistency.testNonPersistency
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.TestJobStatusPersistency.__CLR3_0_2sy1hpdo00(TestJobStatusPersistency.java:76)
at org.apache.hadoop.mapred.TestJobStatusPersistency.testNonPersistency(TestJobStatusPersistency.java:75)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobStatusPersistency.testPersistency
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.TestJobStatusPersistency.__CLR3_0_2rjopw8o0b(TestJobStatusPersistency.java:92)
at org.apache.hadoop.mapred.TestJobStatusPersistency.testPersistency(TestJobStatusPersistency.java:88)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobStatusPersistency.testLocalPersistency
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.TestJobStatusPersistency.__CLR3_0_2sz0ooxo13(TestJobStatusPersistency.java:133)
at org.apache.hadoop.mapred.TestJobStatusPersistency.testLocalPersistency(TestJobStatusPersistency.java:123)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobSysDirWithDFS.testWithDFS
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestJobSysDirWithDFS.__CLR3_0_2xybd4wicw(TestJobSysDirWithDFS.java:130)
at org.apache.hadoop.mapred.TestJobSysDirWithDFS.testWithDFS(TestJobSysDirWithDFS.java:119)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestJobTrackerXmlJsp.testXmlWellFormed
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestLazyOutput.testLazyOutput
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestLazyOutput.__CLR3_0_2a4qckuxfs(TestLazyOutput.java:146)
at org.apache.hadoop.mapred.TestLazyOutput.testLazyOutput(TestLazyOutput.java:136)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMapredHeartbeat.testOutOfBandHeartbeats
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestMapredHeartbeat.__CLR3_0_2ag5s9mkct(TestMapredHeartbeat.java:90)
at org.apache.hadoop.mapred.TestMapredHeartbeat.testOutOfBandHeartbeats(TestMapredHeartbeat.java:79)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy6.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMapredSystemDir.testGarbledMapredSystemDir
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.TestMapredSystemDir.__CLR3_0_2y2jwptkec(TestMapredSystemDir.java:80)
at org.apache.hadoop.mapred.TestMapredSystemDir.testGarbledMapredSystemDir(TestMapredSystemDir.java:52)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRChildTask.testTaskTempDir

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRChildTask.setUp(TestMiniMRChildTask.java:318)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRChildTask.testTaskEnv

Error Message:
Exception in testing child env

Stack Trace:
junit.framework.AssertionFailedError: Exception in testing child env
at org.apache.hadoop.mapred.TestMiniMRChildTask.__CLR3_0_2q3iv99jag(TestMiniMRChildTask.java:387)
at org.apache.hadoop.mapred.TestMiniMRChildTask.testTaskEnv(TestMiniMRChildTask.java:376)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRChildTask.testTaskOldEnv

Error Message:
Exception in testing child env

Stack Trace:
junit.framework.AssertionFailedError: Exception in testing child env
at org.apache.hadoop.mapred.TestMiniMRChildTask.__CLR3_0_2v9k6wsjar(TestMiniMRChildTask.java:409)
at org.apache.hadoop.mapred.TestMiniMRChildTask.testTaskOldEnv(TestMiniMRChildTask.java:398)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRClasspath.__CLR3_0_26048c2h5d(TestMiniMRClasspath.java:176)
at org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath(TestMiniMRClasspath.java:163)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRClasspath.__CLR3_0_2aab2u4h63(TestMiniMRClasspath.java:210)
at org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable(TestMiniMRClasspath.java:195)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRDFSCaching.testWithDFS

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRDFSCaching.__CLR3_0_2xybd4wg22(TestMiniMRDFSCaching.java:41)
at org.apache.hadoop.mapred.TestMiniMRDFSCaching.testWithDFS(TestMiniMRDFSCaching.java:33)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

FAILED: org.apache.hadoop.mapred.TestMiniMRDFSSort$1.unknown

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRDFSSort$1.setUp(TestMiniMRDFSSort.java:67)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRWithDFS.testWithDFS

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRWithDFS.__CLR3_0_2xybd4wlgc(TestMiniMRWithDFS.java:290)
at org.apache.hadoop.mapred.TestMiniMRWithDFS.testWithDFS(TestMiniMRWithDFS.java:280)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRWithDFS.testWithDFSWithDefaultPort

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestMiniMRWithDFS.__CLR3_0_2enfsjalgx(TestMiniMRWithDFS.java:316)
at org.apache.hadoop.mapred.TestMiniMRWithDFS.testWithDFSWithDefaultPort(TestMiniMRWithDFS.java:304)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.testDistinctUsers

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.setUp(TestMiniMRWithDFSWithDistinctUsers.java:95)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.testMultipleSpills

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.setUp(TestMiniMRWithDFSWithDistinctUsers.java:76)

REGRESSION: org.apache.hadoop.mapred.TestMultipleLevelCaching.testMultiLevelCaching

Error Message:
com/google/protobuf/MessageOrBuilder

Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:997)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:268)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:315)
at org.apache.hadoop.mapred.UtilsForTests.writeFile(UtilsForTests.java:488)
at org.apache.hadoop.mapred.TestMultipleLevelCaching.testCachingAtLevel(TestMultipleLevelCaching.java:100)
at org.apache.hadoop.mapred.TestMultipleLevelCaching.__CLR3_0_274mfplxrw(TestMultipleLevelCaching.java:74)
at org.apache.hadoop.mapred.TestMultipleLevelCaching.testMultiLevelCaching(TestMultipleLevelCaching.java:72)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)

REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testMRRefreshDefault

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.TestNodeRefresh.startCluster(TestNodeRefresh.java:114)
at org.apache.hadoop.mapred.TestNodeRefresh.__CLR3_0_2th2bz8vxq(TestNodeRefresh.java:163)
at org.apache.hadoop.mapred.TestNodeRefresh.testMRRefreshDefault(TestNodeRefresh.java:159)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testMRSuperUsers

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.mapred.TestNodeRefresh.__CLR3_0_2uywus5vyk(TestNodeRefresh.java:226)
at org.apache.hadoop.mapred.TestNodeRefresh.testMRSuperUsers(TestNodeRefresh.java:217)

REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testMRRefreshDecommissioning

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.mapred.TestNodeRefresh.__CLR3_0_2x3pj7dvzl(TestNodeRefresh.java:293)
at org.apache.hadoop.mapred.TestNodeRefresh.testMRRefreshDecommissioning(TestNodeRefresh.java:286)

REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testMRRefreshRecommissioning

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.mapred.TestNodeRefresh.__CLR3_0_2mevwe1w0c(TestNodeRefresh.java:376)
at org.apache.hadoop.mapred.TestNodeRefresh.testMRRefreshRecommissioning(TestNodeRefresh.java:334)

REGRESSION: org.apache.hadoop.mapred.TestNodeRefresh.testBlacklistedNodeDecommissioning

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.mapred.TestNodeRefresh.__CLR3_0_2a21altw1x(TestNodeRefresh.java:439)
at org.apache.hadoop.mapred.TestNodeRefresh.testBlacklistedNodeDecommissioning(TestNodeRefresh.java:431)

REGRESSION: org.apache.hadoop.mapred.TestRecoveryManager.testJobTrackerInfoCreation

Error Message:
com/google/protobuf/MessageOrBuilder

Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:414)
at org.apache.hadoop.mapred.JobTracker$RecoveryManager.updateRestartCount(JobTracker.java:1131)
at org.apache.hadoop.mapred.TestRecoveryManager.__CLR3_0_2v7pfi7evo(TestRecoveryManager.java:304)
at org.apache.hadoop.mapred.TestRecoveryManager.testJobTrackerInfoCreation(TestRecoveryManager.java:285)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
FAILED: org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.unknown
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.setUp(TestReduceFetchFromPartialMem.java:60)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
FAILED: org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.unknown
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.setUp(TestReduceFetchFromPartialMem.java:60)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestSetupAndCleanupFailure.__CLR3_0_2xybd4wlc4(TestSetupAndCleanupFailure.java:239)
at org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:226)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
FAILED: org.apache.hadoop.mapred.TestSeveral$1.unknown
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.TestSeveral$1.setUp(TestSeveral.java:108)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.testJobWithDFS
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.__CLR3_0_2s239fdjfr(TestSpecialCharactersInOutputPath.java:113)
at org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.testJobWithDFS(TestSpecialCharactersInOutputPath.java:101)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestSubmitJob.testSecureJobExecution
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.TestSubmitJob.__CLR3_0_2jrhwd1b46(TestSubmitJob.java:215)
at org.apache.hadoop.mapred.TestSubmitJob.testSecureJobExecution(TestSubmitJob.java:197)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy6.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestTaskFail.testWithDFS
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapred.TestTaskFail.__CLR3_0_2xybd4w8al(TestTaskFail.java:214)
at org.apache.hadoop.mapred.TestTaskFail.testWithDFS(TestTaskFail.java:204)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestWebUIAuthorization.testAuthorizationForJobHistoryPages
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.TestWebUIAuthorization.startCluster(TestWebUIAuthorization.java:480)
at org.apache.hadoop.mapred.TestWebUIAuthorization.__CLR3_0_21xxnoydar(TestWebUIAuthorization.java:318)
at org.apache.hadoop.mapred.TestWebUIAuthorization.testAuthorizationForJobHistoryPages(TestWebUIAuthorization.java:299)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestWebUIAuthorization.testWebUIAuthorization
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.TestWebUIAuthorization.startCluster(TestWebUIAuthorization.java:480)
at org.apache.hadoop.mapred.TestWebUIAuthorization.__CLR3_0_293wwomdet(TestWebUIAuthorization.java:686)
at org.apache.hadoop.mapred.TestWebUIAuthorization.testWebUIAuthorization(TestWebUIAuthorization.java:668)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.TestWebUIAuthorization.testWebUIAuthorizationForCommonServlets
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.TestWebUIAuthorization.__CLR3_0_2xc69zadg8(TestWebUIAuthorization.java:762)
at org.apache.hadoop.mapred.TestWebUIAuthorization.testWebUIAuthorizationForCommonServlets(TestWebUIAuthorization.java:752)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapred.join.TestDatamerge.testSimpleInnerJoin
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapred.join.TestDatamerge.createWriters(TestDatamerge.java:83)
at org.apache.hadoop.mapred.join.TestDatamerge.writeSimpleSrc(TestDatamerge.java:94)
at org.apache.hadoop.mapred.join.TestDatamerge.joinAs(TestDatamerge.java:234)
at org.apache.hadoop.mapred.join.TestDatamerge.__CLR3_0_239fxnh81y(TestDatamerge.java:250)
at org.apache.hadoop.mapred.join.TestDatamerge.testSimpleInnerJoin(TestDatamerge.java:249)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
REGRESSION: org.apache.hadoop.mapred.join.TestDatamerge.testSimpleOuterJoin
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapred.join.TestDatamerge.createWriters(TestDatamerge.java:83)
at org.apache.hadoop.mapred.join.TestDatamerge.writeSimpleSrc(TestDatamerge.java:94)
at org.apache.hadoop.mapred.join.TestDatamerge.joinAs(TestDatamerge.java:234)
at org.apache.hadoop.mapred.join.TestDatamerge.__CLR3_0_2abzcwo820(TestDatamerge.java:254)
at org.apache.hadoop.mapred.join.TestDatamerge.testSimpleOuterJoin(TestDatamerge.java:253)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapred.join.TestDatamerge.testSimpleOverride
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapred.join.TestDatamerge.createWriters(TestDatamerge.java:83)
at org.apache.hadoop.mapred.join.TestDatamerge.writeSimpleSrc(TestDatamerge.java:94)
at org.apache.hadoop.mapred.join.TestDatamerge.joinAs(TestDatamerge.java:234)
at org.apache.hadoop.mapred.join.TestDatamerge.__CLR3_0_2a19all822(TestDatamerge.java:258)
at org.apache.hadoop.mapred.join.TestDatamerge.testSimpleOverride(TestDatamerge.java:257)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapred.join.TestDatamerge.testNestedJoin
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapred.join.TestDatamerge.createWriters(TestDatamerge.java:83)
at org.apache.hadoop.mapred.join.TestDatamerge.__CLR3_0_2fiduje824(TestDatamerge.java:275)
at org.apache.hadoop.mapred.join.TestDatamerge.testNestedJoin(TestDatamerge.java:261)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapred.join.TestDatamerge.testEmptyJoin
Error Message:
Job failed!
Stack Trace:
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:781)
at org.apache.hadoop.mapred.join.TestDatamerge.__CLR3_0_2f6b0b084o(TestDatamerge.java:367)
at org.apache.hadoop.mapred.join.TestDatamerge.testEmptyJoin(TestDatamerge.java:353)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapred.lib.TestDelegatingInputFormat.testSplitting
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.mapred.lib.TestDelegatingInputFormat.getPath(TestDelegatingInputFormat.java:105)
at org.apache.hadoop.mapred.lib.TestDelegatingInputFormat.__CLR3_0_29pbfvhii7(TestDelegatingInputFormat.java:47)
at org.apache.hadoop.mapred.lib.TestDelegatingInputFormat.testSplitting(TestDelegatingInputFormat.java:38)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
REGRESSION: org.apache.hadoop.mapred.pipes.TestPipes.testPipes
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.mapred.pipes.TestPipes.__CLR3_0_2pf4zqd0y(TestPipes.java:95)
at org.apache.hadoop.mapred.pipes.TestPipes.testPipes(TestPipes.java:69)
REGRESSION: org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:100)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:85)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapreduce.TestMRJobClient.testMissingProfileOutput
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
REGRESSION: org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.testLazyOutput
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.__CLR3_0_2a4qckuymi(TestMapReduceLazyOutput.java:136)
at org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.testLazyOutput(TestMapReduceLazyOutput.java:126)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.writeFile(TestCombineFileInputFormat.java:683)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.__CLR3_0_2vt3rnoq86(TestCombineFileInputFormat.java:305)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:280)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
REGRESSION: org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.writeGzipFile(TestCombineFileInputFormat.java:694)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.__CLR3_0_2hpgwb3qhz(TestCombineFileInputFormat.java:735)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:709)
REGRESSION: org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testMissingBlocks
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.writeFile(TestCombineFileInputFormat.java:683)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.__CLR3_0_2vlf5sxqpv(TestCombineFileInputFormat.java:1081)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testMissingBlocks(TestCombineFileInputFormat.java:1060)
REGRESSION: org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat.testSplitting
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat.getPath(TestDelegatingInputFormat.java:100)
at org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat.__CLR3_0_29pbfvhiee(TestDelegatingInputFormat.java:45)
at org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat.testSplitting(TestDelegatingInputFormat.java:36)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
REGRESSION: org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testSimpleInnerJoin
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.createWriters(TestJoinDatamerge.java:66)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.writeSimpleSrc(TestJoinDatamerge.java:77)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.joinAs(TestJoinDatamerge.java:254)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.__CLR3_0_239fxnhfcm(TestJoinDatamerge.java:276)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testSimpleInnerJoin(TestJoinDatamerge.java:275)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
REGRESSION: org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testSimpleOuterJoin
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.createWriters(TestJoinDatamerge.java:66)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.writeSimpleSrc(TestJoinDatamerge.java:77)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.joinAs(TestJoinDatamerge.java:254)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.__CLR3_0_2abzcwofco(TestJoinDatamerge.java:280)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testSimpleOuterJoin(TestJoinDatamerge.java:279)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testSimpleOverride
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.createWriters(TestJoinDatamerge.java:66)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.writeSimpleSrc(TestJoinDatamerge.java:77)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.joinAs(TestJoinDatamerge.java:254)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.__CLR3_0_2a19allfdn(TestJoinDatamerge.java:327)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testSimpleOverride(TestJoinDatamerge.java:326)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testNestedJoin
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.createWriters(TestJoinDatamerge.java:66)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.__CLR3_0_2fidujefdp(TestJoinDatamerge.java:344)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testNestedJoin(TestJoinDatamerge.java:330)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
REGRESSION: org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testEmptyJoin
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.__CLR3_0_2f6b0b0fgb(TestJoinDatamerge.java:443)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge.testEmptyJoin(TestJoinDatamerge.java:425)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.mapreduce.lib.join.TestJoinProperties$1.unknown
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1029)
at org.apache.hadoop.mapreduce.lib.join.TestJoinProperties.createWriters(TestJoinProperties.java:75)
at org.apache.hadoop.mapreduce.lib.join.TestJoinProperties.generateSources(TestJoinProperties.java:99)
at org.apache.hadoop.mapreduce.lib.join.TestJoinProperties.access$100(TestJoinProperties.java:40)
at org.apache.hadoop.mapreduce.lib.join.TestJoinProperties$1.setUp(TestJoinProperties.java:55)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)

FAILED: junit.framework.TestSuite.org.apache.hadoop.mapreduce.security.TestBinaryTokenFile
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapreduce.security.TestBinaryTokenFile.setUp(TestBinaryTokenFile.java:146)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy15.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

FAILED: junit.framework.TestSuite.org.apache.hadoop.mapreduce.security.TestTokenCache
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapreduce.security.TestTokenCache.setUp(TestTokenCache.java:154)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy15.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

FAILED: junit.framework.TestSuite.org.apache.hadoop.mapreduce.security.TestTokenCacheOldApi
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapreduce.security.TestTokenCacheOldApi.setUp(TestTokenCacheOldApi.java:182)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy15.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.security.TestMapredGroupMappingServiceRefresh.testGroupMappingRefresh
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.security.TestMapredGroupMappingServiceRefresh.setUp(TestMapredGroupMappingServiceRefresh.java:113)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy15.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.security.TestMapredGroupMappingServiceRefresh.testRefreshSuperUserGroupsConfiguration
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.security.TestMapredGroupMappingServiceRefresh.setUp(TestMapredGroupMappingServiceRefresh.java:113)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy15.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.testServiceLevelAuthorization
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.__CLR3_0_2b7v51dkgy(TestServiceLevelAuthorization.java:74)
at org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.testServiceLevelAuthorization(TestServiceLevelAuthorization.java:44)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testCopyFromDfsToDfs
Error Message:
com/google/protobuf/MessageOrBuilder
Stack Trace:
java.lang.NoClassDefFoundError: com/google/protobuf/MessageOrBuilder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader.<clinit>(DataTransferProtocol.java:504)
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2e6yof1wtz(TestCopyFiles.java:287)
at org.apache.hadoop.tools.TestCopyFiles.testCopyFromDfsToDfs(TestCopyFiles.java:278)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.MessageOrBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testEmptyDir
Error Message:
Destination directory does not exist.
Stack Trace:
junit.framework.AssertionFailedError: Destination directory does not exist.
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2v5ob47wum(TestCopyFiles.java:327)
at org.apache.hadoop.tools.TestCopyFiles.testEmptyDir(TestCopyFiles.java:308)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testCopyFromLocalToDfs
Error Message:
File does not exist: /destdat/3465903345130084506
Stack Trace:
java.io.FileNotFoundException: File does not exist: /destdat/3465903345130084506
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:758)
at org.apache.hadoop.tools.TestCopyFiles.checkFiles(TestCopyFiles.java:169)
at org.apache.hadoop.tools.TestCopyFiles.checkFiles(TestCopyFiles.java:159)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2hl9wyrwv9(TestCopyFiles.java:353)
at org.apache.hadoop.tools.TestCopyFiles.testCopyFromLocalToDfs(TestCopyFiles.java:339)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testCopyFromDfsToLocal
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2o5d2y1wvu(TestCopyFiles.java:376)
at org.apache.hadoop.tools.TestCopyFiles.testCopyFromDfsToLocal(TestCopyFiles.java:367)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testCopyDfsToDfsUpdateOverwrite
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2ddsv4zwwg(TestCopyFiles.java:403)
at org.apache.hadoop.tools.TestCopyFiles.testCopyDfsToDfsUpdateOverwrite(TestCopyFiles.java:395)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testCopyDfsToDfsUpdateWithSkipCRC
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_29z237lwxd(TestCopyFiles.java:475)
at org.apache.hadoop.tools.TestCopyFiles.testCopyDfsToDfsUpdateWithSkipCRC(TestCopyFiles.java:455)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testBasedir
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2rlzv7zwzn(TestCopyFiles.java:621)
at org.apache.hadoop.tools.TestCopyFiles.testBasedir(TestCopyFiles.java:612)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testPreserveOption
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2p0v3lyx08(TestCopyFiles.java:647)
at org.apache.hadoop.tools.TestCopyFiles.testPreserveOption(TestCopyFiles.java:638)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testMapCount
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_28y0qdwx2d(TestCopyFiles.java:746)
at org.apache.hadoop.tools.TestCopyFiles.testMapCount(TestCopyFiles.java:736)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testLimits
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2gnl1gvx3e(TestCopyFiles.java:805)
at org.apache.hadoop.tools.TestCopyFiles.testLimits(TestCopyFiles.java:789)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testDelete
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2yilj0cx6l(TestCopyFiles.java:970)
at org.apache.hadoop.tools.TestCopyFiles.testDelete(TestCopyFiles.java:952)

REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testDeleteLocal
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_2tkhgbvx7p(TestCopyFiles.java:1033)
at org.apache.hadoop.tools.TestCopyFiles.testDeleteLocal(TestCopyFiles.java:1024)
REGRESSION: org.apache.hadoop.tools.TestCopyFiles.testGlobbing
Error Message:
Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PacketHeader
at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1282)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1237)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:747)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:725)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:706)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:605)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:594)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:144)
at org.apache.hadoop.tools.TestCopyFiles.createFile(TestCopyFiles.java:154)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:135)
at org.apache.hadoop.tools.TestCopyFiles.createFiles(TestCopyFiles.java:124)
at org.apache.hadoop.tools.TestCopyFiles.__CLR3_0_23k01ovx8d(TestCopyFiles.java:1066)
at org.apache.hadoop.tools.TestCopyFiles.testGlobbing(TestCopyFiles.java:1057)
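A `NoClassDefFoundError` reading "Could not initialize class" means the class's static initializer already failed once, earlier in the same JVM; every later use of the class then reports only this secondary error, which is why the root cause is not in any of the traces above. A minimal sketch of that JVM behavior (class and member names are hypothetical, not from Hadoop):

```java
public class InitFailureDemo {
    // Hypothetical class whose static initializer always fails.
    static class Broken {
        static final int VALUE = compute(); // forces class initialization on first use
        static int compute() { throw new RuntimeException("root cause lives here"); }
    }

    public static void main(String[] args) {
        try {
            System.out.println(Broken.VALUE);
        } catch (ExceptionInInitializerError e) {
            // First touch: the real failure, with the root cause attached.
            System.out.println(e.getClass().getSimpleName());
        }
        try {
            System.out.println(Broken.VALUE);
        } catch (NoClassDefFoundError e) {
            // Every later touch: only "Could not initialize class ...".
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```

So the test that actually triggered the initializer failure ran earlier in this forked JVM; these TestCopyFiles cases are collateral damage.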
REGRESSION: org.apache.hadoop.tools.TestDistCh.testDistCh
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.tools.TestDistCh.__CLR3_0_2oyf62kbpu(TestDistCh.java:133)
at org.apache.hadoop.tools.TestDistCh.testDistCh(TestDistCh.java:129)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.tools.TestHadoopArchives.testPathWithSpaces
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.tools.TestHadoopArchives.setUp(TestHadoopArchives.java:79)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.tools.TestHarFileSystem.testRelativeArchives
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:467)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:459)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:449)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:439)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:430)
at org.apache.hadoop.tools.TestHarFileSystem.setUp(TestHarFileSystem.java:61)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy12.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
REGRESSION: org.apache.hadoop.tools.TestHarFileSystem.testArchivesWithMapred
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.tools.TestHarFileSystem.setUp(TestHarFileSystem.java:59)
REGRESSION: org.apache.hadoop.tools.TestHarFileSystem.testGetFileBlockLocations
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.tools.TestHarFileSystem.setUp(TestHarFileSystem.java:59)
REGRESSION: org.apache.hadoop.tools.TestHarFileSystem.testSpaces
Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:638)
at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1260)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:600)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1516)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:242)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:605)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:520)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:466)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:346)
at org.apache.hadoop.tools.TestHarFileSystem.setUp(TestHarFileSystem.java:59)
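The "Cannot lock storage ... already locked" failures follow the single-writer pattern HDFS uses for its storage directories: an exclusive `java.nio` file lock on a lock file inside the directory, which a leftover process (or an earlier test run that never released it) still holds. A sketch of the pattern under that assumption — the `in_use.lock` file name follows HDFS convention, the rest is illustrative, not the actual `Storage` implementation:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;

public class StorageLockDemo {
    /** Acquire an exclusive lock on dir, or fail the way the traces above do. */
    static FileLock lockStorage(Path dir) throws IOException {
        RandomAccessFile file =
            new RandomAccessFile(dir.resolve("in_use.lock").toFile(), "rws");
        FileLock lock = file.getChannel().tryLock(); // null if another process holds it
        if (lock == null) {
            file.close();
            throw new IOException(
                "Cannot lock storage " + dir + ". The directory is already locked.");
        }
        return lock;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("storage");
        FileLock lock = lockStorage(dir); // first (and only) locker succeeds
        System.out.println("locked: " + lock.isValid());
        lock.release();
    }
}
```

Here the lock on `.../dfs/name1` was presumably left behind by the earlier failed setUp in the same workspace, so each subsequent TestHarFileSystem case fails the same way.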
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
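The `TestFileSystem.testCommandFormat` failure is a plain argument-count check: `CommandFormat.parse` rejects a third positional argument when at most two are allowed. A hypothetical re-implementation of that check (not the actual Hadoop `CommandFormat` API) that reproduces the message in the trace:

```java
import java.util.Arrays;
import java.util.List;

public class ArgCountCheck {
    static class TooManyArgumentsException extends IllegalArgumentException {
        TooManyArgumentsException(int expected, int actual) {
            super("Too many arguments: expected " + expected + " but got " + actual);
        }
    }

    /** Accepts between min and max positional arguments, inclusive. */
    static List<String> parse(int min, int max, String... args) {
        List<String> positional = Arrays.asList(args);
        if (positional.size() < min) {
            throw new IllegalArgumentException("Too few arguments");
        }
        if (positional.size() > max) {
            throw new TooManyArgumentsException(max, positional.size());
        }
        return positional;
    }

    public static void main(String[] args) {
        try {
            parse(1, 2, "src", "dst", "extra"); // three args, max is two
        } catch (TooManyArgumentsException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Whether the bug is in the test's expectations or in the parser's bounds is what the failing assertion is flagging.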
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.cli.TestMRCLI.setUp(TestMRCLI.java:48)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1079)
at org.apache.hadoop.ipc.Client.call(Client.java:1055)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at $Proxy15.getClusterMetrics(Unknown Source)
at org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:200)
at org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:677)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:323)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:440)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:528)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:209)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
at org.apache.hadoop.ipc.Client.call(Client.java:1032)
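The repeated `Call to localhost/127.0.0.1:0 failed` traces share one symptom: the client is dialing port 0, which is a bind-time "assign me a free port" convention and never a reachable destination, so the connect is refused before MiniMRCluster has published a real JobTracker address. The underlying symptom — connecting to a local port with no listener — reduces to this sketch (the free-port trick is illustrative, not from the Hadoop code):

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectRefusedDemo {
    public static void main(String[] args) throws IOException {
        int freePort;
        try (ServerSocket ss = new ServerSocket(0)) { // kernel assigns a free port
            freePort = ss.getLocalPort();
        } // closed again: nothing is listening there now
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("127.0.0.1", freePort), 1000);
            System.out.println("unexpectedly connected");
        } catch (ConnectException e) {
            System.out.println("Connection refused"); // same root cause as the trace above
        }
    }
}
```

In the failing builds the refused connection suggests the in-process JobTracker either never started or died before `waitUntilIdle` polled it.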
Hadoop-Mapreduce-trunk - Build # 706 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/706/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 236962 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-10 14:35:19,607 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,608 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,608 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,608 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,609 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,609 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,609 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,610 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,610 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,610 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,610 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,611 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,611 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,611 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,612 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,612 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,612 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,613 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,613 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-10 14:35:19,613 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.117 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.358 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.309 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 90 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)
Hadoop-Mapreduce-trunk - Build # 705 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/705/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 235627 lines...]
[junit] 0.85:96549
[junit] 0.9:96658
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-06-09 14:34:14,267 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,267 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,268 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,268 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,268 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,269 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,269 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,269 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,270 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,270 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,270 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,271 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,271 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,271 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,272 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,272 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,272 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,273 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,273 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-06-09 14:34:14,273 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(330)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.129 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.337 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.309 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:847: Tests failed!
Total time: 90 minutes 24 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.fs.TestFileSystem.testCommandFormat
Error Message:
Too many arguments: expected 2 but got 3
Stack Trace:
org.apache.hadoop.fs.shell.CommandFormat$TooManyArgumentsException: Too many arguments: expected 2 but got 3
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:113)
at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:77)
at org.apache.hadoop.fs.TestFileSystem.__CLR3_0_2b0mwvrw5m(TestFileSystem.java:97)
at org.apache.hadoop.fs.TestFileSystem.testCommandFormat(TestFileSystem.java:92)
FAILED: org.apache.hadoop.cli.TestMRCLI.testAll
Error Message:
One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
at org.apache.hadoop.cli.TestMRCLI.tearDown(TestMRCLI.java:56)