Posted to mapreduce-user@hadoop.apache.org by Steve Lewis <lo...@gmail.com> on 2012/02/14 23:10:52 UTC

All jobs fail with the errors

All of my current jobs, and the wordcount example I used when learning, are
failing with the same error:

Error reading task output http://glados9.systemsbiology.net:50060/tasklog?plaintext=true&taskid=attempt_201202141331_0008_m_000001_2&filter=stdout

glados9 is a slave node; other slave nodes appear in the same error as well.

I have tried deleting the userlogs directory on all of the slave nodes, without
success. I see no useful logs and am at a real loss for what to do. I suspect
some directory somewhere has too many entries, but other than userlogs I am not
sure where else to look.
-- 
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com

Re: All jobs fail with the errors

Posted by Ioan Eugen STAN <st...@gmail.com>.
On Wed, 15 Feb 2012 00:10:52 +0200, Steve Lewis wrote:
> All of my current jobs, and the wordcount example I used when learning, are
> failing with the same error:
>
> Error reading task output http://glados9.systemsbiology.net:50060/tasklog?plaintext=true&taskid=attempt_201202141331_0008_m_000001_2&filter=stdout
>
> glados9 is a slave node; other slave nodes appear in the same error as well.
>
> I have tried deleting the userlogs directory on all of the slave nodes, without
> success. I see no useful logs and am at a real loss for what to do. I suspect
> some directory somewhere has too many entries, but other than userlogs I am not
> sure where else to look.

I've also seen that output, and in my case it was caused by the job failing to 
run (I think it was a different version of slf4j on the classpath; make sure 
you use the version Hadoop is compiled against, or a compatible one).
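
A quick way to see which slf4j your job actually picks up is to ask the 
classloader where the classes come from. This is only a minimal sketch of my 
own (the class name Slf4jWhich is made up); run it with the same classpath 
your job uses:

import java.net.URL;

public class Slf4jWhich {
    public static void main(String[] args) {
        ClassLoader cl = Slf4jWhich.class.getClassLoader();
        // Resolve the class files through the classloader instead of loading them,
        // so this still prints something useful even if the binding is broken.
        URL api = cl.getResource("org/slf4j/LoggerFactory.class");
        URL binding = cl.getResource("org/slf4j/impl/StaticLoggerBinder.class");
        System.out.println("slf4j-api loaded from:     " + api);
        System.out.println("slf4j binding loaded from: " + binding);
    }
}

If the jar it prints is not the one shipped with your Hadoop install, exclude 
it from the job jar or align the versions.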

Check the task tracker and job tracker logs on the node where the task ran to 
see why it failed.
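
You can also try pulling the failing attempt's own log straight from the 
tasklog URL in the error. A minimal sketch: the host, port and attempt id are 
the ones from your error message, and switching filter to stderr is my 
assumption (stdout is often empty); it may return nothing if the task tracker 
never captured the task output at all.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FetchTaskLog {
    public static void main(String[] args) throws Exception {
        // URL copied from the error message, with filter switched to stderr.
        URL url = new URL("http://glados9.systemsbiology.net:50060/tasklog"
                + "?plaintext=true"
                + "&taskid=attempt_201202141331_0008_m_000001_2"
                + "&filter=stderr");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}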