Posted to common-dev@hadoop.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2007/06/02 00:20:15 UTC
[jira] Commented: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500876 ]
Hadoop QA commented on HADOOP-1452:
-----------------------------------
-1, could not apply patch.
The patch command could not apply the latest attachment http://issues.apache.org/jira/secure/attachment/12358741/1452.patch as a patch to trunk revision r543622.
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/232/console
Please note that this message is automatically generated and may represent a problem with the automation system and not the patch.
> map output transfers of more than 2^31 bytes output are failing
> ---------------------------------------------------------------
>
> Key: HADOOP-1452
> URL: https://issues.apache.org/jira/browse/HADOOP-1452
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.13.0
> Reporter: Christian Kunz
> Assignee: Owen O'Malley
> Priority: Blocker
> Fix For: 0.13.0
>
> Attachments: 1452.patch
>
>
> Symptom:
> WARN org.apache.hadoop.mapred.ReduceTask: java.io.IOException: Incomplete map output received for http://<host>:50060/mapOutput?map=task_0026_m_000298_0&reduce=61 (2327458761 instead of 2327347307)
> WARN org.apache.hadoop.mapred.ReduceTask: task_0026_r_000061_0 adding host <host> to penalty box, next contact in 263 seconds
> Besides failing to fetch the data, the reduce task retries forever; the number of retries should be limited.
> Source of the problem:
> In mapred/TaskTracker.java, the variable totalRead, which tracks how many bytes have been sent to the reducer, should be declared as a long:
> ...
> int totalRead = 0;
> int len = mapOutputIn.read(buffer, 0,
>                            partLength < MAX_BYTES_TO_READ
>                            ? (int)partLength : MAX_BYTES_TO_READ);
> while (len > 0) {
>   try {
>     outStream.write(buffer, 0, len);
>     outStream.flush();
>   } catch (IOException ie) {
>     isInputException = false;
>     throw ie;
>   }
>   totalRead += len;
>   if (totalRead == partLength) break;
>   len = mapOutputIn.read(buffer, 0,
>                          (partLength - totalRead) < MAX_BYTES_TO_READ
>                          ? (int)(partLength - totalRead) : MAX_BYTES_TO_READ);
> }
> ...
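The overflow described above can be reproduced in isolation. The following standalone sketch (class and method names are hypothetical, not from the Hadoop source) sums chunk lengths the way the loop above does, once with an int accumulator (the reported bug) and once with a long accumulator (the proposed fix), using the 2,327,347,307-byte part length from the error message:

```java
// Hypothetical demonstration of the HADOOP-1452 overflow: an int
// accumulator wraps past Integer.MAX_VALUE for transfers > 2^31 bytes.
public class TotalReadOverflow {

    // Sum chunk lengths with an int accumulator, as in the buggy loop.
    static int sumAsInt(long partLength, int chunkSize) {
        int totalRead = 0;                       // BUG: wraps for partLength > 2^31
        long remaining = partLength;
        while (remaining > 0) {
            int len = (int) Math.min(remaining, chunkSize);
            totalRead += len;                    // silently overflows
            remaining -= len;
        }
        return totalRead;
    }

    // Same loop with a long accumulator, the fix the report suggests.
    static long sumAsLong(long partLength, int chunkSize) {
        long totalRead = 0L;                     // FIX: long cannot wrap here
        long remaining = partLength;
        while (remaining > 0) {
            int len = (int) Math.min(remaining, chunkSize);
            totalRead += len;
            remaining -= len;
        }
        return totalRead;
    }

    public static void main(String[] args) {
        long partLength = 2327347307L;           // size from the error message
        // The int version ends up equal to partLength mod 2^32, reinterpreted
        // as a signed 32-bit value, so the "totalRead == partLength" check
        // in the real loop never becomes true.
        System.out.println("int  accumulator: " + sumAsInt(partLength, 1 << 20));
        System.out.println("long accumulator: " + sumAsLong(partLength, 1 << 20));
    }
}
```

With the int accumulator, the loop's termination test `totalRead == partLength` compares a wrapped 32-bit value against a 64-bit length and can never succeed, which matches the observed behavior of the transfer completing with the wrong byte count.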
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.