Posted to common-dev@hadoop.apache.org by "Christian Kunz (JIRA)" <ji...@apache.org> on 2007/06/01 22:17:15 UTC
[jira] Created: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
map output transfers of more than 2^31 bytes output are failing
---------------------------------------------------------------
Key: HADOOP-1452
URL: https://issues.apache.org/jira/browse/HADOOP-1452
Project: Hadoop
Issue Type: Bug
Components: mapred
Affects Versions: 0.13.0
Reporter: Christian Kunz
Symptom:
WARN org.apache.hadoop.mapred.ReduceTask: java.io.IOException: Incomplete map output received for http://<host>:50060/mapOutput?map=task_0026_m_000298_0&reduce=61 (2327458761 instead of 2327347307)
WARN org.apache.hadoop.mapred.ReduceTask: task_0026_r_000061_0 adding host <host> to penalty box, next contact in 263 seconds
Besides failing to fetch the data, the reduce retries forever; the number of retries should be limited.
Source of the problem:
In mapred/TaskTracker.java, the variable totalRead, which tracks how many bytes have been sent to the reducer, should be declared as a long:
...
int totalRead = 0;
int len = mapOutputIn.read(buffer, 0,
                           partLength < MAX_BYTES_TO_READ
                             ? (int)partLength : MAX_BYTES_TO_READ);
while (len > 0) {
  try {
    outStream.write(buffer, 0, len);
    outStream.flush();
  } catch (IOException ie) {
    isInputException = false;
    throw ie;
  }
  totalRead += len;
  if (totalRead == partLength) break;
  len = mapOutputIn.read(buffer, 0,
                         (partLength - totalRead) < MAX_BYTES_TO_READ
                           ? (int)(partLength - totalRead) : MAX_BYTES_TO_READ);
}
...
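The overflow can be reproduced in isolation. The sketch below is not the TaskTracker code (class and variable names are made up for illustration); it accumulates a simulated byte count past 2^31 with both an int counter, as in the buggy loop above, and a long counter, as in the proposed fix. The int counter wraps negative, so a check like "totalRead == partLength" can never become true and the transfer is reported as incomplete:

```java
// Hypothetical standalone sketch of the 2^31 overflow; not TaskTracker code.
public class OverflowSketch {
    public static void main(String[] args) {
        long partLength = 2327458761L;  // ~2.2 GB, as in the reported symptom
        int chunk = 64 * 1024;          // simulated read size per iteration

        int totalReadInt = 0;           // buggy: wraps past Integer.MAX_VALUE
        long totalReadLong = 0;         // fixed: 64-bit counter

        for (long sent = 0; sent < partLength; sent += chunk) {
            int len = (int) Math.min(chunk, partLength - sent);
            totalReadInt += len;        // silently overflows to a negative value
            totalReadLong += len;       // stays correct
        }

        System.out.println(totalReadInt);   // negative: wrapped around
        System.out.println(totalReadLong);  // equals partLength
    }
}
```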
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500942 ]
Hadoop QA commented on HADOOP-1452:
-----------------------------------
Integrated in Hadoop-Nightly #108 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/108/)
[jira] Commented: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500876 ]
Hadoop QA commented on HADOOP-1452:
-----------------------------------
-1, could not apply patch.
The patch command could not apply the latest attachment http://issues.apache.org/jira/secure/attachment/12358741/1452.patch as a patch to trunk revision r543622.
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/232/console
Please note that this message is automatically generated and may represent a problem with the automation system and not the patch.
[jira] Updated: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Owen O'Malley updated HADOOP-1452:
----------------------------------
Attachment: 1452.patch
This changes TaskTracker.MapOutputServlet.doGet.totalRead to a long.
[jira] Updated: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Owen O'Malley updated HADOOP-1452:
----------------------------------
Status: Patch Available (was: Open)
[jira] Updated: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Doug Cutting updated HADOOP-1452:
---------------------------------
Resolution: Fixed
Status: Resolved (was: Patch Available)
I just committed this. Thanks, Owen!
[jira] Updated: (HADOOP-1452) map output transfers of more than 2^31 bytes output are failing
Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Owen O'Malley updated HADOOP-1452:
----------------------------------
Fix Version/s: 0.13.0
Assignee: Owen O'Malley
Priority: Blocker (was: Major)