Posted to common-dev@hadoop.apache.org by "Chris Douglas (JIRA)" <ji...@apache.org> on 2009/04/10 05:51:13 UTC
[jira] Created: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
--------------------------------------------------------------------------------------------------------
Key: HADOOP-5652
URL: https://issues.apache.org/jira/browse/HADOOP-5652
Project: Hadoop Core
Issue Type: Bug
Components: mapred
Affects Versions: 0.20.0
Reporter: Chris Douglas
Priority: Minor
If the number of on-disk segments is exactly {{io.sort.factor}}, then map output segments may be left in memory for the reduce contrary to the specification in {{mapred.job.reduce.input.buffer.percent}}.
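The reported condition can be illustrated with a toy decision function. This is a hypothetical sketch, not the actual ReduceTask code; the class, method, and parameter names are invented here, and only the config keys {{io.sort.factor}} and {{mapred.job.reduce.input.buffer.percent}} come from the report:

```java
public class SpillConditionSketch {
    // Pre-patch behavior as described in the report: the spill decision keys
    // only on the on-disk segment count, so when it EQUALS io.sort.factor the
    // in-memory segments stay resident regardless of the configured limit.
    static boolean buggyMustSpill(int onDiskSegments, int ioSortFactor,
                                  long inMemoryBytes, long memLimitBytes) {
        return onDiskSegments > ioSortFactor;
    }

    // What the configuration promises: mapred.job.reduce.input.buffer.percent
    // caps how many map-output bytes may remain in memory for the reduce, so
    // the limit must be honored independently of the segment count.
    static boolean correctMustSpill(int onDiskSegments, int ioSortFactor,
                                    long inMemoryBytes, long memLimitBytes) {
        return onDiskSegments > ioSortFactor || inMemoryBytes > memLimitBytes;
    }
}
```

With {{io.sort.factor}} = 10, exactly 10 on-disk segments, and a memory limit of zero, the buggy predicate keeps 64 MB of map output in memory while the correct one forces a spill.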
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12697946#action_12697946 ]
Chris Douglas commented on HADOOP-5652:
---------------------------------------
{noformat}
Test org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy FAILED
Test org.apache.hadoop.mapred.TestMRServerPorts FAILED
Test org.apache.hadoop.mapred.TestQueueCapacities FAILED
{noformat}
None of the test failures were caused by the patch.
[jira] Updated: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Douglas updated HADOOP-5652:
----------------------------------
Attachment: 5652-1.patch
Better fix: merge the in-memory segments with the smallest on-disk segment.
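The strategy this comment describes can be sketched as follows. This is a hypothetical model of the approach, not the patch itself; representing segments as a list of byte sizes, and all names here, are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MergeFixSketch {
    // Returns the on-disk segment sizes after spilling the in-memory bytes.
    // Rather than writing a new segment (which could push the on-disk count
    // past io.sort.factor), fold the in-memory data into the smallest
    // on-disk segment, leaving the segment count unchanged.
    static List<Long> spillWithSmallest(List<Long> onDiskSizes, long inMemoryBytes) {
        List<Long> result = new ArrayList<>(onDiskSizes);
        if (inMemoryBytes <= 0) {
            return result;           // nothing held in memory: no spill needed
        }
        if (result.isEmpty()) {
            result.add(inMemoryBytes); // nothing on disk: spill to a new segment
            return result;
        }
        int smallest = result.indexOf(Collections.min(result));
        result.set(smallest, result.get(smallest) + inMemoryBytes);
        return result;
    }
}
```

Merging with the smallest segment minimizes the extra bytes re-read and re-written during the combined merge pass while still guaranteeing the in-memory data ends up on disk.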
[jira] Updated: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Douglas updated HADOOP-5652:
----------------------------------
Fix Version/s: 0.21.0
Assignee: Chris Douglas
Status: Patch Available (was: Open)
[jira] Commented: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Hudson (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12700720#action_12700720 ]
Hudson commented on HADOOP-5652:
--------------------------------
Integrated in Hadoop-trunk #811 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/811/])
[jira] Commented: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12697934#action_12697934 ]
Hadoop QA commented on HADOOP-5652:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12405143/5652-1.patch
against trunk revision 763728.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no tests are needed for this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed core unit tests.
-1 contrib tests. The patch failed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/180/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/180/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/180/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/180/console
This message is automatically generated.
[jira] Commented: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12699244#action_12699244 ]
Devaraj Das commented on HADOOP-5652:
-------------------------------------
+1
[jira] Updated: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Douglas updated HADOOP-5652:
----------------------------------
Resolution: Fixed
Hadoop Flags: [Reviewed]
Status: Resolved (was: Patch Available)
I committed this.
[jira] Updated: (HADOOP-5652) Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Douglas updated HADOOP-5652:
----------------------------------
Attachment: 5652-0.patch
Easy fix.