Posted to common-user@hadoop.apache.org by jiang licht <li...@yahoo.com> on 2010/10/21 03:01:29 UTC

specify number of mappers/reducers per node per job?

Is there a way to control the maximum number of mappers or reducers per node, per job? I.e., say I have a cluster and I want to run a job such that on each node no more than 2 mappers run at the same time (while the maximum numbers of map/reduce slots on a node are larger values, specified by "mapred.tasktracker.map.tasks.maximum" and "mapred.tasktracker.reduce.tasks.maximum").
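For reference, the two properties named above are per-node slot ceilings set cluster-wide in mapred-site.xml on each TaskTracker; they apply to all jobs combined, not to any single job. A minimal sketch (the values here are just examples):

```xml
<!-- mapred-site.xml on each TaskTracker node (example values) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>  <!-- max concurrent map tasks on this node, all jobs combined -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>  <!-- max concurrent reduce tasks on this node, all jobs combined -->
</property>
```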

Thanks,

Michael



Re: specify number of mappers/reducers per node per job?

Posted by Allen Wittenauer <aw...@linkedin.com>.
http://wiki.apache.org/hadoop/LimitingTaskSlotUsage
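The page linked above describes scheduler-based approaches. One of them, assuming the Fair Scheduler is installed, is to cap a pool's concurrent tasks in its allocation file; note that this caps the pool's total across the whole cluster, not tasks per node. A sketch (the pool name and values are made-up examples):

```xml
<!-- Fair Scheduler allocations file (sketch; pool name and values are examples) -->
<allocations>
  <pool name="limited">
    <maxMaps>10</maxMaps>       <!-- total concurrent map tasks for this pool, cluster-wide -->
    <maxReduces>5</maxReduces>  <!-- total concurrent reduce tasks for this pool, cluster-wide -->
  </pool>
</allocations>
```

Submitting the job to that pool (e.g. via mapred.fairscheduler.pool) then bounds its parallelism without touching the per-node slot settings.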



Re: specify number of mappers/reducers per node per job?

Posted by jiang licht <li...@yahoo.com>.
Thanks, Harsh.

That's my suspicion as well. It's part of what a scheduler is responsible for, such as the capacity scheduler. I wanted to make sure I hadn't missed another node-wise, job-level property that limits the resources available to a job, alongside "mapred.tasktracker.*.tasks.maximum".
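With the capacity scheduler mentioned above, the analogous knob is a per-queue capacity in capacity-scheduler.xml; again, this bounds a queue's cluster-wide share of slots, not tasks per node. A sketch, assuming a hypothetical queue named "small" (the queue name and percentages are examples):

```xml
<!-- capacity-scheduler.xml (sketch; queue name and percentages are examples) -->
<property>
  <name>mapred.capacity-scheduler.queue.small.capacity</name>
  <value>20</value>  <!-- guaranteed share: 20% of the cluster's slots -->
</property>
<property>
  <name>mapred.capacity-scheduler.queue.small.maximum-capacity</name>
  <value>25</value>  <!-- hard ceiling on the queue's share -->
</property>
```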

Thanks,

Michael

--- On Thu, 10/21/10, Harsh J <qw...@gmail.com> wrote:

From: Harsh J <qw...@gmail.com>
Subject: Re: specify number of mappers/reducers per node per job?
To: common-user@hadoop.apache.org
Date: Thursday, October 21, 2010, 1:03 AM

AFAIK there is no way to control this from a job submission perspective.
Maybe the scheduler concept in Hadoop MapReduce can help you.

--
Harsh J
http://www.harshj.com

On Oct 21, 2010 6:32 AM, "jiang licht" <li...@yahoo.com> wrote:

Is there a way to control the maximum number of mappers or reducers per node,
per job? I.e., say I have a cluster and I want to run a job such that on each
node no more than 2 mappers run at the same time (while the maximum numbers of
map/reduce slots on a node are larger values, specified by
"mapred.tasktracker.map.tasks.maximum" and
"mapred.tasktracker.reduce.tasks.maximum").

Thanks,

Michael




Re: specify number of mappers/reducers per node per job?

Posted by Harsh J <qw...@gmail.com>.
AFAIK there is no way to control this from a job submission perspective.
Maybe the scheduler concept in Hadoop MapReduce can help you.

--
Harsh J
http://www.harshj.com

On Oct 21, 2010 6:32 AM, "jiang licht" <li...@yahoo.com> wrote:

Is there a way to control the maximum number of mappers or reducers per node,
per job? I.e., say I have a cluster and I want to run a job such that on each
node no more than 2 mappers run at the same time (while the maximum numbers of
map/reduce slots on a node are larger values, specified by
"mapred.tasktracker.map.tasks.maximum" and
"mapred.tasktracker.reduce.tasks.maximum").

Thanks,

Michael