Posted to mapreduce-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2015/05/06 05:33:16 UTC

[jira] [Updated] (MAPREDUCE-4136) Hadoop streaming might succeed even though reducer fails

     [ https://issues.apache.org/jira/browse/MAPREDUCE-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated MAPREDUCE-4136:
----------------------------------------
    Labels: BB2015-05-TBR  (was: )

> Hadoop streaming might succeed even though reducer fails
> --------------------------------------------------------
>
>                 Key: MAPREDUCE-4136
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4136
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/streaming
>    Affects Versions: 0.20.205.0
>            Reporter: Wouter de Bie
>              Labels: BB2015-05-TBR
>         Attachments: mapreduce-4136.patch
>
>
> Hadoop streaming can succeed even though the reducer has failed. This happens when Hadoop calls {{PipeReducer.close()}} but the reducer has already failed and its process has died in the meantime. When {{clientOut_.flush()}} then throws an {{IOException}} in {{PipeMapRed.mapRedFinish()}}, the exception is caught but only logged. The exit status of the child process is never checked, so the task is marked as successful.
> I've attached a patch that seems to fix it for us.
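
The attached patch is not included in this message, so the following is only a rough, self-contained sketch of the kind of check being described: after flushing to the streaming child process fails (or even if it succeeds), inspect the child's exit status and surface a failure instead of only logging the {{IOException}}. The class and field names below (StreamingFinishSketch, child, clientOut) are illustrative stand-ins, not the actual PipeMapRed code or the patch itself.

{code:java}
import java.io.DataOutputStream;
import java.io.IOException;

public class StreamingFinishSketch {

  // Illustrative stand-ins for the streaming subprocess and the stream
  // feeding its stdin; the real field names in Hadoop may differ.
  private final Process child;
  private final DataOutputStream clientOut;

  public StreamingFinishSketch(Process child, DataOutputStream clientOut) {
    this.child = child;
    this.clientOut = clientOut;
  }

  /** Roughly what a "finish" step should do to avoid masking reducer failures. */
  public void finish() throws IOException {
    try {
      clientOut.flush();
      clientOut.close();
    } catch (IOException e) {
      // The reported bug: swallowing this exception and returning normally
      // lets a dead reducer look successful. Here we only log it and rely on
      // the exit-status check below to fail the task.
      System.err.println("flush/close to child failed: " + e);
    }
    try {
      int exitCode = child.waitFor();  // wait for the streaming process to exit
      if (exitCode != 0) {
        // Propagate the failure so the task attempt is marked as failed.
        throw new IOException("streaming child process failed with exit code " + exitCode);
      }
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      throw new IOException("interrupted while waiting for streaming child", ie);
    }
  }

  // Tiny usage example: start a child that exits non-zero and observe that
  // finish() now reports the failure instead of returning normally.
  public static void main(String[] args) throws Exception {
    Process p = new ProcessBuilder("sh", "-c", "exit 1").start();
    StreamingFinishSketch s =
        new StreamingFinishSketch(p, new DataOutputStream(p.getOutputStream()));
    try {
      s.finish();
    } catch (IOException expected) {
      System.out.println("caught expected failure: " + expected.getMessage());
    }
  }
}
{code}

In this sketch the non-zero exit code turns into an exception that reaches the caller, which is the behaviour the report asks for: a reducer that has died should fail the task rather than let it complete successfully.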



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)