Posted to mapreduce-user@hadoop.apache.org by rajeshbabu chintaguntla <ra...@huawei.com> on 2013/02/09 14:04:35 UTC
maximum capacity of queue is not taking effect beyond its capacity
Hi,
When I run a job it hangs because map/reduce task containers cannot get free memory resources.
Total memory available: 8 GB
Scheduler configured: CapacityScheduler
Queue configured: a
The ApplicationMaster started on queue 'a', consuming 1.5 GB.
After that, no container gets launched because it cannot get free memory from queue 'a', even though the free memory (max - consumed) is sufficient.
When I debugged, I observed that the queue capacity requirements are satisfied but the user capacity requirements are not.
I think there is some problem here; I am not able to follow the logic behind it. Please help me with this case.
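If it helps to see the numbers: the arithmetic below sketches how the CapacityScheduler's per-user limit could produce exactly this behavior with my configuration. The user-limit-factor default of 1.0 is an assumption (I have not set it explicitly):

```python
# Sketch of the CapacityScheduler per-user limit arithmetic for this setup.
# Cluster and queue values are from this email; user_limit_factor = 1.0 is
# an assumption (the default when the property is not set).

cluster_mb = 8 * 1024          # 8 GB total cluster memory
queue_a_capacity = 0.05        # root.a.capacity = 5%
queue_a_max_capacity = 0.60    # root.a.maximum-capacity = 60%
user_limit_factor = 1.0        # assumed default

queue_guaranteed_mb = cluster_mb * queue_a_capacity      # 409.6 MB
queue_max_mb = cluster_mb * queue_a_max_capacity         # 4915.2 MB
user_limit_mb = queue_guaranteed_mb * user_limit_factor  # 409.6 MB

am_mb = 1536                   # the ApplicationMaster already holds 1.5 GB

# The queue itself still has elastic headroom up to maximum-capacity...
queue_headroom_mb = queue_max_mb - am_mb                 # > 3 GB free
# ...but a single user is capped near the queue's *configured* capacity,
# and the AM alone already exceeds that cap, so no task container fits.
print(queue_headroom_mb > 0)   # queue check passes
print(am_mb > user_limit_mb)   # user check fails
```

This would match what I see in debugging: the queue-level check passes while the user-level check blocks every allocation.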
capacity-scheduler.xml - configurations
========================
<configuration>
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>a,b</value>
<description>The queues at this level (root is the root queue).
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.a.capacity</name>
<value>5</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
<value>60</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.b.capacity</name>
<value>95</value>
</property>
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>8</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.acl_submit_jobs</name>
<value>*</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.a.acl_submit_jobs</name>
<value>*</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.b.acl_submit_jobs</name>
<value>*</value>
</property>
</configuration>
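One thing I am considering, assuming the hang is caused by the per-user limit: `yarn.scheduler.capacity.root.a.user-limit-factor` defaults to 1, which caps a single user at the queue's configured capacity (5% here) regardless of maximum-capacity. Raising it would let one user grow into the elastic headroom. The value 12 below is only an illustration (5% x 12 = 60%, matching my maximum-capacity); I have not verified this on my cluster:

```xml
<property>
  <name>yarn.scheduler.capacity.root.a.user-limit-factor</name>
  <!-- illustrative value: allows a single user up to 5% x 12 = 60% -->
  <value>12</value>
</property>
```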
Thanks and Regards,
Rajeshbabu.