Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2008/09/26 22:39:44 UTC

[jira] Commented: (HADOOP-4295) job-level configurable mapred.map.tasks.maximum and mapred.reduce.tasks.maximum

    [ https://issues.apache.org/jira/browse/HADOOP-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635004#action_12635004 ] 

Doug Cutting commented on HADOOP-4295:
--------------------------------------

I think these are appropriately set at the tasktracker level, since they're meant to correspond to the resources of the tasktracker, e.g., the number of cores. If one has a mixed cluster, with some 2-core nodes and some 4-core nodes, then one might reasonably set these differently on different nodes. The memory limits of HADOOP-2765 and HADOOP-4035 can be used to control per-job resource usage.
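
For illustration, here is a minimal sketch of what such per-node limits might look like in each tasktracker's hadoop-site.xml (the file name and the slot counts are assumptions for this example, not taken from the thread); a 4-core node can simply be given larger values than a 2-core node:

    <!-- hadoop-site.xml on a 4-core tasktracker; values are illustrative -->
    <configuration>
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>4</value>  <!-- roughly one map slot per core -->
      </property>
      <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
      </property>
    </configuration>

A 2-core tasktracker in the same cluster would carry correspondingly smaller values in its own configuration file.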

> job-level configurable mapred.map.tasks.maximum and mapred.reduce.tasks.maximum 
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-4295
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4295
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Christian Kunz
>
> Right now mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum are set at the tasktracker level.
> In the absence of a smart tasktracker that monitors resources and adaptively decides how many tasks can run simultaneously, it would be nice to move these two configuration options to the job level. This would make it easier to optimize the performance of a batch of jobs.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.