Posted to user@hadoop.apache.org by wuchang <58...@qq.com> on 2017/06/26 02:24:12 UTC

Restrict the number of containers that can run in parallel in YARN?

For a MapReduce job, I have restricted the memory of the mappers and reducers with:

mapreduce.map.memory.mb=1G
mapreduce.reduce.memory.mb=4G

But these parameters only restrict the memory of each individual mapper or reducer task, not the number of mappers or reducers that can be launched in parallel. When the job ran, it consumed about 50G of memory, which is not what I want; I found that many containers were launched in parallel.
What I want is to restrict the memory or CPU resources the application can consume at any moment; for example, the maximum memory this application can use at any moment should be 10G. What can I do?

Re: Restrict the number of containers that can run in parallel in YARN?

Posted by Akira Ajisaka <aa...@apache.org>.
Hi wuchang,

If you are using Hadoop 2.7+,
you can use the following parameters to limit the number of
simultaneously running map/reduce tasks per MapReduce application:

* mapreduce.job.running.map.limit (default: 0, for no limit)
* mapreduce.job.running.reduce.limit (default: 0, for no limit)
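
For example, to stay near the 10G ceiling you mentioned, you can pass the limits on the command line. This is just a sketch: it assumes your driver implements the Tool interface (so GenericOptionsParser picks up the -D options), and the jar name, class name, and paths are placeholders:

    hadoop jar your-job.jar com.example.YourJob \
        -Dmapreduce.job.running.map.limit=10 \
        -Dmapreduce.job.running.reduce.limit=2 \
        /input /output

With at most 10 concurrent map tasks at mapreduce.map.memory.mb=1G each, the map containers are capped at about 10G at any moment; note that the ApplicationMaster container, and any reducers running concurrently at 4G each, add to that total.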

Regards,
Akira

On 2017/06/26 11:24, wuchang wrote:
> For a MapReduce job, I have restricted the memory of the mappers and reducers with:
>
> mapreduce.map.memory.mb=1G
> mapreduce.reduce.memory.mb=4G
>
> But these parameters only restrict the memory of each individual mapper or reducer task, not the number of mappers or reducers that can be launched in parallel. When the job ran, it consumed about 50G of memory, which is not what I want; I found that many containers were launched in parallel.
> What I want is to restrict the memory or CPU resources the application can consume at any moment; for example, the maximum memory this application can use at any moment should be 10G. What can I do?

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org