Posted to mapreduce-issues@hadoop.apache.org by "Todd Lipcon (JIRA)" <ji...@apache.org> on 2009/11/10 21:53:27 UTC

[jira] Updated: (MAPREDUCE-576) writing to status reporter before consuming standard input causes task failure.

     [ https://issues.apache.org/jira/browse/MAPREDUCE-576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated MAPREDUCE-576:
----------------------------------

    Affects Version/s: 0.20.1

This bug still seems to exist in 0.20.1.
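
The NullPointerException in the first trace below comes from PipeMapRed$MRErrorThread.setStatus. A plausible reading, though I have not verified it against the 0.20.1 source, is that the stderr-reading thread forwards reporter:status: lines to the task's Reporter, and that Reporter is only wired up once the first input record reaches the pipe process; a status line written before any input therefore hits a null reporter.

Until this is fixed, a streaming script can buffer its status lines and emit them only after it has read at least one record. A minimal sketch of that workaround, patterned on the succeeding reducer in the description below (the pending_status name is mine, purely illustrative):

  #!/usr/bin/env python
  import sys

  def main():
      # Hold status lines until at least one input record has been
      # consumed; emitting them earlier appears to trigger the NPE in
      # PipeMapRed$MRErrorThread.setStatus shown below.
      pending_status = ['reporter:status:foo\n']
      seen_input = False
      for line in sys.stdin:
          if not seen_input:
              seen_input = True
              for status in pending_status:
                  sys.stderr.write(status)
              sys.stderr.flush()
          sys.stdout.write(line)  # identity reduce

  if __name__ == "__main__":
      main()

Status lines written once input is flowing are safe, as the succeeding script in the description shows.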

> writing to status reporter before consuming standard input causes task failure.
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-576
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-576
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/streaming
>    Affects Versions: 0.20.1
>         Environment: amazon ec2 instance created with the given scripts (fedora, small)
>            Reporter: Karl Anderson
>
> A Hadoop Streaming task that writes a status reporter line to stderr before consuming any standard input fails; writing the same line after input has been consumed succeeds.
> I triggered this failure with a Python reducer writing a "reporter:status:foo\n" line to stderr. I didn't try writing anything else.
> The reducer script that fails:
>   #!/usr/bin/env python
>   import sys
>   if __name__ == "__main__":
>       # Status line written before any input has been read:
>       sys.stderr.write('reporter:status:foo\n')
>       sys.stderr.flush()
>       for line in sys.stdin:
>           print line
> The reducer script that succeeds:
>   #!/usr/bin/env python
>   import sys
>   if __name__ == "__main__":
>       for line in sys.stdin:
>           # Status line written only after input has started:
>           sys.stderr.write('reporter:status:foo\n')
>           sys.stderr.flush()
>           print line
> The hadoop invocation I used:
> hadoop jar /usr/local/hadoop-0.18.1/contrib/streaming/hadoop-0.18.1-streaming.jar -mapper cat -reducer ./reducer_foo.py -input vectors -output clusters_1 -jobconf mapred.map.tasks=512 -jobconf mapred.reduce.tasks=512 -file ./reducer_foo.py
> This is on a 64 node hadoop-ec2 cluster.
> One of the errors listed on the failures page (they all appear to be the same):
> java.io.IOException: subprocess exited successfully
> R/W/S=1/0/0 in:0=1/41 [rec/s] out:0=0/41 [rec/s]
> minRecWrittenToEnableSkip_=9223372036854775807 LOGNAME=null
> HOST=null
> USER=root
> HADOOP_USER=null
> last Hadoop input: |null|
> last tool output: |null|
> Date: Mon Oct 20 19:13:38 EDT 2008
> MROutput/MRErrThread failed:java.lang.NullPointerException
> 	at org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.setStatus(PipeMapRed.java:497)
> 	at org.apache.hadoop.streaming.PipeMapRed$MRErrorThread.run(PipeMapRed.java:429)
> 	at org.apache.hadoop.streaming.PipeReducer.reduce(PipeReducer.java:103)
> 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:318)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
> The stderr log for a failed task:
> Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> 	at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> 	at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> 	at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> 	at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> 	at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> 	at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> 	at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> 	at java.util.TimerThread.mainLoop(Timer.java:512)
> 	at java.util.TimerThread.run(Timer.java:462)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.