Posted to user@hadoop.apache.org by David Parks <da...@yahoo.com> on 2012/12/13 05:22:48 UTC

Shuffle's getMapOutput() fails with EofException, followed by IllegalStateException

I'm having exactly this problem, and it's causing my job to fail when I try
to process a larger amount of data (I'm attempting to process 30GB of
compressed CSVs and the entire job fails every time).

This issue is open for it:
https://issues.apache.org/jira/browse/MAPREDUCE-5

Does anyone have any idea of a workaround for the problem? To my eyes Hadoop
is just crashing when I try to process a large job (v1.0.3 on Elastic
MapReduce). But that seems crazy; there must be something I can do to get
things working.

The only difference between what is stated in that bug report and my case is
that some of my map tasks fail at the end, but I believe that is a knock-on
effect of the reduce-side problems, since the failing map tasks are simply
timing out without reporting much more information than that.

Description (copied from JIRA):
---------------------------------------
During the shuffle phase, I'm seeing a large sequence of the following
actions:
1) WARN org.apache.hadoop.mapred.TaskTracker:
getMapOutput(attempt_200905181452_0002_m_000010_0,0) failed :
org.mortbay.jetty.EofException
2) WARN org.mortbay.log: Committed before 410
getMapOutput(attempt_200905181452_0002_m_000010_0,0) failed :
org.mortbay.jetty.EofException
3) ERROR org.mortbay.log: /mapOutput java.lang.IllegalStateException:
Committed
The map phase completes at 100%, and then the reduce phase crawls along
with the above errors appearing in each of the TaskTracker logs. None of the
TaskTrackers are lost. When I run non-data jobs like the 'pi' test from the
examples jar, everything works fine.




RE: Shuffle's getMapOutput() fails with EofException, followed by IllegalStateException

Posted by David Parks <da...@yahoo.com>.
If anyone follows this thread in the future: it turns out that I was being
led astray by these errors; they weren't the cause of the problem. This was
the resolution:

http://stackoverflow.com/questions/9803939/why-is-reduce-stuck-at-16/9815715#comment19074114_9815715

I was working with the filesystem directly and leaving a connection to it
open, which caused the map tasks that used that code to hang (without any
error).
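If it helps anyone, here is a minimal sketch of the pattern involved (the
class, path, and method names are made up, not from my actual job): make sure
any stream you open through the FileSystem API is closed before the task code
returns, e.g. in a finally block.

// A minimal sketch with hypothetical names, assuming the mapper was reading a
// side file through the Hadoop FileSystem API: the important part is closing
// the stream before returning, so no connection is left open to hang the task.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SideFileReader {

    /** Reads a side file and returns its size in bytes. */
    public static long countBytes(Configuration conf, Path sideFile) throws IOException {
        // FileSystem.get() returns a shared, cached instance, so it is not
        // closed here; only the streams opened on it need to be closed.
        FileSystem fs = FileSystem.get(sideFile.toUri(), conf);

        FSDataInputStream in = fs.open(sideFile);
        try {
            byte[] buf = new byte[64 * 1024];
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            return total;
        } finally {
            // Closing the stream releases the underlying connection; leaving
            // a stream like this open is the kind of leak described above.
            in.close();
        }
    }
}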



-----Original Message-----
From: David Parks [mailto:davidparks21@yahoo.com] 
Sent: Thursday, December 13, 2012 11:23 AM
To: user@hadoop.apache.org
Subject: Shuffle's getMapOutput() fails with EofException, followed by
IllegalStateException

I'm having exactly this problem, and it's causing my job to fail when I try
to process a larger amount of data (I'm attempting to process 30GB of
compressed CSVs and the entire job fails every time).

This issue is open for it:
https://issues.apache.org/jira/browse/MAPREDUCE-5

Does anyone have any idea of a workaround for the problem? To my eyes Hadoop
is just crashing when I try to process a large job (v1.0.3 on Elastic
MapReduce). But that seems crazy; there must be something I can do to get
things working.

The only difference between what is stated in that bug report and my case is
that some of my map tasks fail at the end, but I believe that is a knock-on
effect of the reduce-side problems, since the failing map tasks are simply
timing out without reporting much more information than that.

Description (copied from JIRA):
---------------------------------------
During the shuffle phase, I'm seeing a large sequence of the following
actions:
1) WARN org.apache.hadoop.mapred.TaskTracker:
getMapOutput(attempt_200905181452_0002_m_000010_0,0) failed :
org.mortbay.jetty.EofException
2) WARN org.mortbay.log: Committed before 410
getMapOutput(attempt_200905181452_0002_m_000010_0,0) failed :
org.mortbay.jetty.EofException
3) ERROR org.mortbay.log: /mapOutput java.lang.IllegalStateException:
Committed
The map phase completes at 100%, and then the reduce phase crawls along
with the above errors appearing in each of the TaskTracker logs. None of the
TaskTrackers are lost. When I run non-data jobs like the 'pi' test from the
examples jar, everything works fine.



