Posted to common-user@hadoop.apache.org by Du Lam <de...@gmail.com> on 2014/05/27 02:52:20 UTC

setting maximum mappers running concurrently

Is there any setting that can be applied at job runtime to cap the number of mappers running concurrently?

I know there is a JobTracker-level parameter that can be set, but that is a global parameter that applies to every job. Is it possible to set it per job?

Re: setting maximum mappers running concurrently

Posted by João Paulo Forny <jp...@gmail.com>.
The number of map and reduce slots on each TaskTracker node is controlled by the mapreduce.tasktracker.map.tasks.maximum and mapreduce.tasktracker.reduce.tasks.maximum Hadoop properties in the mapred-site.xml file. If you change these settings, you must restart all of the TaskTracker nodes.

I don't think you can change these settings for a specific job through a -D parameter, since you would need to restart the TaskTrackers.
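
As a sketch, a mapred-site.xml fragment setting these two properties might look like the following (the values 4 and 2 are illustrative only, not recommendations; this assumes the Hadoop 1.x / MR1 configuration layout):

```xml
<!-- mapred-site.xml on each TaskTracker node (example values) -->
<configuration>
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <!-- maximum map tasks running at once on this node -->
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <!-- maximum reduce tasks running at once on this node -->
    <value>2</value>
  </property>
</configuration>
```

Note that these are per-node slot counts, not a per-job limit, which is why they don't answer the original question directly.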


2014-05-26 21:52 GMT-03:00 Du Lam <de...@gmail.com>:

> is there any setting that can set on run time of job for maximum mapper
> concurrently running ?
>
> i know there is a jobtracker level parameter that can be set, but that
> will be global parameter for every job.  Is it possible to set per job ?
>
