Posted to common-user@hadoop.apache.org by abhishek sharma <ab...@gmail.com> on 2010/01/24 03:43:12 UTC

resolution to Hadoop error: mapred.JobClient: Error reading task output

Hi all,

I had sent a query yesterday asking about the following error

WARN mapred.JobClient: Error reading task output http://<machine.domainname>:50060/tasklog?plaintext=true&taskid=attempt_201001221644_0001_r_000001_2&filter=stdout
INFO mapred.JobClient: Task Id : attempt_201001221644_0001_r_000001_2, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)

I found the reason for it at
http://stackoverflow.com/questions/2091287/error-in-hadoop-mapreduce

The answer is pasted below:

One reason Hadoop produces this error is that the directory containing
the task log files has too many entries. This is a limit of the ext3
filesystem, which allows at most 32000 links per inode; since every
subdirectory adds a hard link to its parent directory, a single
directory can hold only about 32000 subdirectories.
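
The link count of the directory itself shows how close you are to that
limit. A rough check (assuming GNU coreutils and the {hadoop-home}
placeholder used elsewhere in this post):

$ stat -c %h {hadoop-home}/logs/userlogs   # prints the hard-link count; a value near 32000 means the limit is hit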

Check how many subdirectories have accumulated in your logs directory:
{hadoop-home}/logs/userlogs
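
A quick way to count them (assuming a standard Linux shell):

$ ls {hadoop-home}/logs/userlogs | wc -l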

A simple test for this problem is to try to create a directory from
the command line, for example:

$ mkdir {hadoop-home}/logs/userlogs/testdir

If userlogs already contains too many directories, the OS will fail to
create the new one and report that there are too many links.
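
If that is indeed the cause, one way to recover (assuming the old task
logs are no longer needed) is to clear out the userlogs directory, e.g.:

$ rm -rf {hadoop-home}/logs/userlogs/*

after which new task attempts can create their log directories again.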

 Thanks,
Abhishek