Posted to mapreduce-issues@hadoop.apache.org by "Nathan Roberts (JIRA)" <ji...@apache.org> on 2013/06/07 00:00:20 UTC

[jira] [Commented] (MAPREDUCE-5308) Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677574#comment-13677574 ] 

Nathan Roberts commented on MAPREDUCE-5308:
-------------------------------------------

The patch fixes this issue by attempting to read() one additional byte from the input. This should always return EOF, but as a side effect it forces the decompressor to finish processing the entire compressed stream. That keeps the IFileInputStream in sync and ensures it is completely processed (e.g. its checksum is verified) as part of this map_id. If we get something other than EOF, then something is truly wrong and we should fail this particular map.
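
For context, the approach looks roughly like the sketch below. This is a simplified illustration, not the attached patch: the method name, the exception message, and the surrounding stream handling are stand-ins for the real Fetcher/IFileInputStream code.

    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.mapreduce.TaskAttemptID;

    // Simplified sketch of shuffling one compressed map output into memory.
    private void shuffleToMemory(TaskAttemptID mapId, InputStream input,
                                 byte[] shuffleData) throws IOException {
      // Read exactly the advertised number of decompressed bytes.
      IOUtils.readFully(input, shuffleData, 0, shuffleData.length);

      // Attempt to read one extra byte. A well-formed stream returns EOF,
      // but the attempt forces the decompressor to consume any trailing
      // bytes (e.g. checksums) of the compressed stream, so the underlying
      // IFileInputStream stays in sync and its checksum gets verified as
      // part of this map output.
      if (input.read() != -1) {
        // Anything other than EOF means the stream is truly corrupt; fail
        // this map rather than letting leftover bytes break the next map
        // output fetched on the same connection.
        throw new IOException("Unexpected extra bytes in map output " + mapId);
      }
    }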
                
> Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs
> -------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5308
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5308
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: trunk, 2.0.3-alpha, 0.23.8
>            Reporter: Nathan Roberts
>            Assignee: Nathan Roberts
>         Attachments: MAPREDUCE-5308.patch
>
>
> When a reducer is fetching multiple compressed map outputs from a host, the fetcher can get out-of-sync with the IFileInputStream, causing several of the maps to fail to fetch.
> This occurs because decompressors can return all the decompressed bytes before actually processing all the bytes in the compressed stream (due to checksums or other trailing data that we ignore). In the unfortunate case where these extra bytes cross an io.file.buffer.size boundary, some extra bytes will be left over and the next map_output will not fetch correctly (usually due to an invalid map_id).
> This scenario is not typically fatal to a job because the failure is charged to the map_output immediately following the "bad" one and the subsequent retry will normally work. 
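
A rough sketch of why those leftover bytes matter (illustrative only; the header field names and serialization here approximate ShuffleHeader and may not match the real Fetcher exactly):

    import java.io.DataInputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableUtils;
    import org.apache.hadoop.mapreduce.TaskAttemptID;

    // Each map output on the connection starts with a small header holding
    // the map id and lengths. If the previous decompressor left trailing
    // checksum bytes unread, this read begins at the wrong offset: the map
    // id comes back as garbage and the failure is charged to this (next)
    // map output instead of the one whose stream was left out of sync.
    private TaskAttemptID readNextHeader(DataInputStream in) throws IOException {
      String mapIdText = WritableUtils.readString(in);
      long compressedLength = WritableUtils.readVLong(in);
      long decompressedLength = WritableUtils.readVLong(in);
      // forName() rejects garbage, producing the "invalid map_id" failure
      // described above.
      return TaskAttemptID.forName(mapIdText);
    }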
