Posted to mapreduce-user@hadoop.apache.org by Marcin Cylke <mc...@touk.pl> on 2012/01/31 12:21:44 UTC

hadoop-1.0.0 and errors with log.index

Hi

I've upgraded my Hadoop cluster to version 1.0.0. The upgrade process
went relatively smoothly, but it rendered the cluster inoperable due to
errors in the JobTracker's operation:

# in job output
Error reading task
outputhttp://hadoop4:50060/tasklog?plaintext=true&attemptid=attempt_201201311241_0003_m_000004_2&filter=stdout

# in each of the jobtrackers' logs
WARN org.apache.hadoop.mapred.TaskLog: Failed to retrieve stderr log for
task: attempt_201201311241_0003_r_000000_1
java.io.FileNotFoundException:
/usr/lib/hadoop-1.0.0/libexec/../logs/userlogs/job_201201311241_0003/attempt_201201311241_0003_r_000000_1/log.index (No such file or directory)
        at java.io.FileInputStream.open(Native Method)


These errors seem related to these two problems:

http://grokbase.com/t/hadoop.apache.org/mapreduce-user/2012/01/error-reading-task-output-and-log-filenotfoundexceptions/03mjwctewcnxlgp2jkcrhvsgep4e

https://issues.apache.org/jira/browse/MAPREDUCE-2846

But I've looked into the source code and the fix from MAPREDUCE-2846 is
there. Perhaps there is some other reason?
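For reference, a quick scan like the one below (illustrative only; the
userlogs path is taken from the exception above) lists the attempts that
are missing their log.index:

import java.io.File;

// Walk the userlogs tree and report attempt directories with no log.index.
public class FindMissingLogIndex {
    public static void main(String[] args) {
        File userlogs = new File(args.length > 0 ? args[0]
                : "/usr/lib/hadoop-1.0.0/logs/userlogs");
        File[] jobs = userlogs.listFiles();
        if (jobs == null) {
            System.err.println("not a directory: " + userlogs);
            return;
        }
        for (File job : jobs) {
            File[] attempts = job.listFiles();
            if (attempts == null) {
                continue;
            }
            for (File attempt : attempts) {
                if (attempt.isDirectory()
                        && !new File(attempt, "log.index").exists()) {
                    System.out.println("missing log.index: " + attempt.getPath());
                }
            }
        }
    }
}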

Regards
Marcin

Re: hadoop-1.0.0 and errors with log.index

Posted by Koji Noguchi <kn...@yahoo-inc.com>.
On our cluster, this usually happens when the child JVM crashes due to
invalid JVM params or JNI crashing at the init phase.

The stderr/stdout files are created, but log.index does not exist when
this happens.
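For instance, something like this should reproduce it (an untested
sketch; the property name is the standard one, the bad value is just an
illustration):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class CrashChildJvm {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(CrashChildJvm.class);
        conf.setJobName("crash-child-jvm");
        // A malformed heap size makes the child JVM die at startup,
        // before the task can write log.index; the stdout/stderr files
        // are still created by the TaskTracker.
        conf.set("mapred.child.java.opts", "-Xmx512bogus");
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf); // identity map/reduce by default
    }
}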

We should fix this.

Koji



On 1/31/12 10:49 AM, "Markus Jelsma" <ma...@openindex.io> wrote:

> Yes, the stacktrace in my previous message is from the task tracker. It seems
> to happen when there is no data locality for the mapper and it needs to get it
> from some other datanode. The number of failures is the same as the number of
> rack-local mappers.


Re: hadoop-1.0.0 and errors with log.index

Posted by Markus Jelsma <ma...@openindex.io>.
Yes, the stacktrace in my previous message is from the task tracker. It seems 
to happen when there is no data locality for the mapper and it needs to get it 
from some other datanode. The number of failures is the same as the number of 
rack-local mappers.

> Anything in TaskTracker logs?

Re: [ mapreduce ] Re: hadoop-1.0.0 and errors with log.index

Posted by Marcin Cylke <mc...@touk.pl>.
On 31.01.2012 19:32, Arun C Murthy wrote:
> Anything in TaskTracker logs?

Actually, I've found something like this in the job's log. When running
wordcount, I got it in this file:

${result_dir}/_logs/history/job_201201311241_0008_1328055139152_hdfs_word+count

MapAttempt TASK_TYPE="SETUP" TASKID="task_201201311241_0008_m_000001" TASK_ATTEMPT_ID="attempt_201201311241_0008_m_000001_0" TASK_STATUS="FAILED" FINISH_TIME="1328049504887" HOSTNAME="hadoop2" ERROR="Error: Could not initialize class org\.apache\.log4j\.LogManager" .

I'm especially worried about this "Could not initialize class" error. 
Could it be related to the missing log.index file?
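In case it is useful to anyone, a throwaway parser like this (assuming
each record stays on one line, as in the excerpt above) pulls the ERROR
field out of every failed attempt in a history file:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Print the attempt id and ERROR field of every FAILED record in a
// job history file passed as the first argument.
public class FailedAttemptErrors {
    private static final Pattern FAILED = Pattern.compile(
            "TASK_ATTEMPT_ID=\"([^\"]+)\".*TASK_STATUS=\"FAILED\".*ERROR=\"([^\"]*)\"");

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            Matcher m = FAILED.matcher(line);
            if (m.find()) {
                System.out.println(m.group(1) + " -> " + m.group(2));
            }
        }
        in.close();
    }
}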


Marcin


Re: hadoop-1.0.0 and errors with log.index

Posted by Arun C Murthy <ac...@hortonworks.com>.
Anything in TaskTracker logs?

On Jan 31, 2012, at 10:18 AM, Markus Jelsma wrote:

> In our case, which seems to be the same problem, the web UI does not show 
> anything useful except the first line of the stack trace:
> 
> 2012-01-03 21:16:27,256 WARN org.apache.hadoop.mapred.TaskLog: Failed to
> retrieve stdout log for task: attempt_201201031651_0008_m_000233_0

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



Re: hadoop-1.0.0 and errors with log.index

Posted by Markus Jelsma <ma...@openindex.io>.
In our case, which seems to be the same problem, the web UI does not show 
anything useful except the first line of the stack trace:

2012-01-03 21:16:27,256 WARN org.apache.hadoop.mapred.TaskLog: Failed to
retrieve stdout log for task: attempt_201201031651_0008_m_000233_0

Only the task tracker log shows a full stack trace. This happened on 1.0.0 and 
0.20.205.0 but not 0.20.203.0.

2012-01-03 21:16:27,256 WARN org.apache.hadoop.mapred.TaskLog: Failed to
retrieve stdout log for task: attempt_201201031651_0008_m_000233_0
java.io.FileNotFoundException:
/opt/hadoop/hadoop-0.20.205.0/libexec/../logs/userlogs/job_201201031651_0008/attempt_201201031651_0008_m_000233_0/log.index (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.(SecureIOUtils.java:102)
        at org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:187)
        at org.apache.hadoop.mapred.TaskLog$Reader.(TaskLogServlet.java:81)
        at org.apache.hadoop.mapred.TaskLogServlet.doGet(TaskLogServlet.java:296)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
        at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)


> Actually, all that is telling you is that the task failed and the
> job-client couldn't display the logs.
> 
> Can you check the JT web-ui and see why the task failed?
> 
> If you don't see anything there, you can try to see the TaskTracker logs
> on the node on which the task ran.
> 
> Arun

Re: hadoop-1.0.0 and errors with log.index

Posted by Arun C Murthy <ac...@hortonworks.com>.
Actually, all that is telling you is that the task failed and the job-client couldn't display the logs.

Can you check the JT web-ui and see why the task failed?

If you don't see anything there, you can try to see the TaskTracker logs
on the node on which the task ran.
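You can also pull an attempt's logs straight from the TaskTracker's
tasklog servlet. A minimal sketch (host, port, and attempt id here are
just the ones from the job output above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Fetch a task attempt's logs via the TaskTracker's tasklog servlet,
// using the same URL format as the "Error reading task output" line.
public class FetchTaskLog {
    public static void main(String[] args) throws Exception {
        String tracker = args[0]; // e.g. hadoop4:50060
        String attempt = args[1]; // e.g. attempt_201201311241_0003_m_000004_2
        String filter = args.length > 2 ? args[2] : "stdout"; // or "stderr"
        URL url = new URL("http://" + tracker + "/tasklog?plaintext=true"
                + "&attemptid=" + attempt + "&filter=" + filter);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}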

Arun

On Jan 31, 2012, at 3:21 AM, Marcin Cylke wrote:

> Hi
> 
> I've upgraded my Hadoop cluster to version 1.0.0. The upgrade process
> went relatively smoothly, but it rendered the cluster inoperable due to
> errors in the JobTracker's operation:

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



Re: [ mapreduce ] Re: hadoop-1.0.0 and errors with log.index

Posted by Marcin Cylke <mc...@touk.pl>.
On 31/01/12 12:48, Markus Jelsma wrote:
> I've seen that the number of related failures is almost always the same as the
> number of rack-local mappers. Do you see this as well?

Yes, it seems that way.

Marcin


Re: hadoop-1.0.0 and errors with log.index

Posted by Markus Jelsma <ma...@openindex.io>.
I've seen that the number of related failures is almost always the same as the 
number of rack-local mappers. Do you see this as well?

On Tuesday 31 January 2012 12:21:44 Marcin Cylke wrote:
> Hi
> 
> I've upgraded my Hadoop cluster to version 1.0.0. The upgrade process
> went relatively smoothly, but it rendered the cluster inoperable due to
> errors in the JobTracker's operation:

-- 
Markus Jelsma - CTO - Openindex