Posted to mapreduce-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2015/03/16 18:42:39 UTC

[jira] [Updated] (MAPREDUCE-2779) JobSplitWriter.java can't handle large job.split file

     [ https://issues.apache.org/jira/browse/MAPREDUCE-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated MAPREDUCE-2779:
----------------------------------------
    Fix Version/s:     (was: 2.0.0-alpha)

> JobSplitWriter.java can't handle large job.split file
> -----------------------------------------------------
>
>                 Key: MAPREDUCE-2779
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2779
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: job submission
>    Affects Versions: 0.20.205.0, 0.22.0, 0.23.0
>            Reporter: Ming Ma
>            Assignee: Ming Ma
>             Fix For: 0.22.0, 0.23.0
>
>         Attachments: MAPREDUCE-2779-0.22.patch, MAPREDUCE-2779-trunk.patch, MAPREDUCE-2779-trunk.patch
>
>
> We use Cascading's MultiInputFormat. MultiInputFormat sometimes generates a large job.split file (used internally by Hadoop), and that file can grow beyond 2GB.
> In JobSplitWriter.java, the methods that generate this file use a 32-bit signed integer to compute offsets into job.split.
> writeNewSplits:
> ...
>         int prevCount = out.size();
> ...
>         int currCount = out.size();
> writeOldSplits:
> ...
>       long offset = out.size();
> ...
>       int currLen = out.size();
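The failure mode described above can be sketched in isolation. This is not Hadoop code, just a minimal illustration of why storing a byte offset past 2GB in a 32-bit signed int corrupts it: Integer.MAX_VALUE is 2147483647 (~2 GiB), so any larger offset wraps around to a negative value when narrowed to int.

```java
public class SplitOffsetOverflow {
    public static void main(String[] args) {
        // A hypothetical offset 3 GiB into job.split -- beyond Integer.MAX_VALUE.
        long offset = 3L * 1024 * 1024 * 1024; // 3221225472

        // What a 32-bit counter (like `int currCount`) would actually store:
        // the narrowing cast keeps only the low 32 bits, wrapping to a negative value.
        int truncated = (int) offset;

        System.out.println("true offset:      " + offset);    // 3221225472
        System.out.println("as 32-bit int:    " + truncated); // -1073741824
    }
}
```

A negative (or otherwise wrapped) offset written into the split metadata would point tasks at the wrong position in job.split, which is why the fix tracks these positions with `long` throughout.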



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)