Posted to mapreduce-user@hadoop.apache.org by sam liu <sa...@gmail.com> on 2013/07/03 05:33:12 UTC

Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

Hi,

With Hadoop 2.0.4-alpha, 'yarn.nodemanager.resource.cpu-cores' does not work
for me:
1. The performance of the same terasort job does not change, even after
increasing or decreasing the value of 'yarn.nodemanager.resource.cpu-cores'
in yarn-site.xml and restarting the YARN cluster.

2. Even if I set the values of both
'yarn.nodemanager.resource.cpu-cores' and
'yarn.nodemanager.vcores-pcores-ratio' to 0, the MR job still completes
without any exception. The expected behavior, though, should be that no
CPU can be assigned to any container, and therefore no job can run on
the cluster. Right?

Thanks!
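
[For reference: later Hadoop 2.x releases renamed this setting to
'yarn.nodemanager.resource.cpu-vcores'. A sketch of the equivalent
yarn-site.xml entry there, with an illustrative value of 8 that is not
taken from this thread:]

 <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>8</value>
  </property>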

Re: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

Posted by sam liu <sa...@gmail.com>.
Hi Sandy,

Thanks for your detailed explanation! But I am still not completely clear. In
my current cluster with Hadoop 2.0.3-alpha, how can I make the properties
'yarn.nodemanager.resource.cpu-cores' and
'yarn.nodemanager.vcores-pcores-ratio' take effect? Or do they only work
in 2.1.0-beta?



2013/7/23 Sandy Ryza <sa...@cloudera.com>

> Hi Sam,
>
> LinuxResourceCalculatorPlugin and DominantResourceCalculator control
> separate things.  The former is for a NodeManager to calculate the resource
> usage of a container process so that it can kill it if it gets too large.
>  The latter is used by the Capacity Scheduler to allocate containers, and,
> if you're using the Capacity Scheduler, in theory should do what you're
> expecting it to do.  Based on the fix version of YARN-2,
> DominantResourceCalculator should be included in 2.0.4-alpha.  The Fair
> Scheduler will support CPU-based scheduling as well starting in 2.1.0-beta.
>
> -Sandy
>
>
> On Sat, Jul 20, 2013 at 11:04 PM, sam liu <sa...@gmail.com> wrote:
>
>> Thanks, but it seems it does not work for me.
>>
>> My Hadoop version is 2.0.4-alpha, which does not seem to include
>> a class named
>> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator', so I
>> replaced it with
>> 'org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin'. My
>> configuration is below, with default values for the other
>> LinuxContainerExecutor configurations. My expectation was that, with
>> 'yarn.nodemanager.resource.cpu-cores' set to 0, no job could be
>> completed by the NodeManager, as the slaves would have no CPU to use. But,
>> in fact, my job completed successfully. Why?
>>
>>  <property>
>>     <name>yarn.nodemanager.resource.cpu-cores</name>
>>     <value>0</value>
>>   </property>
>>
>>  <property>
>>     <name>yarn.nodemanager.vcores-pcores-ratio</name>
>>     <value>0</value>
>>   </property>
>>
>>  <property>
>>
>> <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
>>
>> <value>org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin</value>
>>   </property>
>>
>>  <property>
>>     <name>yarn.nodemanager.container-executor.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
>>   </property>
>>
>>
>>
>> 2013/7/4 Chuan Liu <ch...@microsoft.com>
>>
>>>  I think you need to change the following configurations in
>>> yarn-site.xml to enable CPU resource limits:
>>>
>>> 'yarn.nodemanager.container-monitor.resource-calculator.class' ->
>>> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator'
>>>
>>> 'yarn.nodemanager.container-executor.class' ->
>>> 'org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor'
>>>
>>> Some LinuxContainerExecutor configurations:
>>>
>>> yarn.nodemanager.linux-container-executor.path
>>> yarn.nodemanager.linux-container-executor.resources-handler.class
>>> yarn.nodemanager.linux-container-executor.cgroups.hierarchy
>>> yarn.nodemanager.linux-container-executor.cgroups.mount
>>> yarn.nodemanager.linux-container-executor.cgroups.mount-path
>>>
>>> -Chuan
>>>
>>> *From:* sam liu [mailto:samliuhadoop@gmail.com]
>>> *Sent:* Tuesday, July 02, 2013 8:33 PM
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Parameter 'yarn.nodemanager.resource.cpu-cores' does not work
>>>
>>> Hi,
>>>
>>> With Hadoop 2.0.4-alpha, yarn.nodemanager.resource.cpu-cores does not
>>> work for me:
>>>
>>> 1. The performance of the same terasort job does not change, even
>>> after increasing or decreasing the value of
>>> 'yarn.nodemanager.resource.cpu-cores' in yarn-site.xml and restarting
>>> the YARN cluster.
>>>
>>> 2. Even if I set the values of both
>>> 'yarn.nodemanager.resource.cpu-cores' and
>>> 'yarn.nodemanager.vcores-pcores-ratio' to 0, the MR job still completes
>>> without any exception. The expected behavior, though, should be that no
>>> CPU can be assigned to any container, and therefore no job can run on
>>> the cluster. Right?
>>>
>>> Thanks!
>>>
>>
>>
>
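
[The cgroups-related properties Chuan lists above would, as a rough
yarn-site.xml sketch, look like the following. The handler class and the
hierarchy/mount values here are assumptions based on the Hadoop 2.x
documentation, not values given anywhere in this thread:]

 <property>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>

 <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>

 <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
    <value>false</value>
  </property>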


Re: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

Posted by Sandy Ryza <sa...@cloudera.com>.
Hi Sam,

LinuxResourceCalculatorPlugin and DominantResourceCalculator control
separate things.  The former is for a NodeManager to calculate the resource
usage of a container process so that it can kill it if it gets too large.
 The latter is used by the Capacity Scheduler to allocate containers, and,
if you're using the Capacity Scheduler, in theory should do what you're
expecting it to do.  Based on the fix version of YARN-2,
DominantResourceCalculator should be included in 2.0.4-alpha.  The Fair
Scheduler will support CPU-based scheduling as well starting in 2.1.0-beta.

-Sandy
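
[Where the Capacity Scheduler is in use, switching it to the
DominantResourceCalculator is done in capacity-scheduler.xml rather than
yarn-site.xml. A sketch of the entry, per the Hadoop 2.x Capacity
Scheduler documentation; this snippet is not quoted from the thread:]

 <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>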


On Sat, Jul 20, 2013 at 11:04 PM, sam liu <sa...@gmail.com> wrote:

> Thanks, but it seems it does not work for me.
>
> My Hadoop version is 2.0.4-alpha, which does not seem to include a
> class named
> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator', so I
> replaced it with
> 'org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin'. My
> configuration is below, with default values for the other
> LinuxContainerExecutor configurations. My expectation was that, with
> 'yarn.nodemanager.resource.cpu-cores' set to 0, no job could be
> completed by the NodeManager, as the slaves would have no CPU to use. But,
> in fact, my job completed successfully. Why?
>
>  <property>
>     <name>yarn.nodemanager.resource.cpu-cores</name>
>     <value>0</value>
>   </property>
>
>  <property>
>     <name>yarn.nodemanager.vcores-pcores-ratio</name>
>     <value>0</value>
>   </property>
>
>  <property>
>
> <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
>
> <value>org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin</value>
>   </property>
>
>  <property>
>     <name>yarn.nodemanager.container-executor.class</name>
>
> <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
>   </property>
>
>
>
> 2013/7/4 Chuan Liu <ch...@microsoft.com>
>
>>  I think you need to change the following configurations in
>> yarn-site.xml to enable CPU resource limits:
>>
>> 'yarn.nodemanager.container-monitor.resource-calculator.class' ->
>> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator'
>>
>> 'yarn.nodemanager.container-executor.class' ->
>> 'org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor'
>>
>> Some LinuxContainerExecutor configurations:
>>
>> yarn.nodemanager.linux-container-executor.path
>> yarn.nodemanager.linux-container-executor.resources-handler.class
>> yarn.nodemanager.linux-container-executor.cgroups.hierarchy
>> yarn.nodemanager.linux-container-executor.cgroups.mount
>> yarn.nodemanager.linux-container-executor.cgroups.mount-path
>>
>> -Chuan
>>
>> *From:* sam liu [mailto:samliuhadoop@gmail.com]
>> *Sent:* Tuesday, July 02, 2013 8:33 PM
>> *To:* user@hadoop.apache.org
>> *Subject:* Parameter 'yarn.nodemanager.resource.cpu-cores' does not work
>>
>> Hi,
>>
>> With Hadoop 2.0.4-alpha, yarn.nodemanager.resource.cpu-cores does not
>> work for me:
>>
>> 1. The performance of the same terasort job does not change, even after
>> increasing or decreasing the value of 'yarn.nodemanager.resource.cpu-cores'
>> in yarn-site.xml and restarting the YARN cluster.
>>
>> 2. Even if I set the values of both
>> 'yarn.nodemanager.resource.cpu-cores' and
>> 'yarn.nodemanager.vcores-pcores-ratio' to 0, the MR job still completes
>> without any exception. The expected behavior, though, should be that no
>> CPU can be assigned to any container, and therefore no job can run on
>> the cluster. Right?
>>
>> Thanks!
>
>


Re: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

Posted by sam liu <sa...@gmail.com>.
Thanks, but it does not seem to work for me.

My Hadoop version is 2.0.4-alpha, and it does not seem to include a class
named 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator', so
I replaced it with
'org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin'. My
configuration is below, and I use the default values for the other
LinuxContainerExecutor configurations. My expectation is that, if I set the
value of 'yarn.nodemanager.resource.cpu-cores' to 0, no job could be
completed by the NodeManager, since the slaves would not have any CPU to
use. But, in fact, my job completes successfully. Why?

 <property>
    <name>yarn.nodemanager.resource.cpu-cores</name>
    <value>0</value>
  </property>

 <property>
    <name>yarn.nodemanager.vcores-pcores-ratio</name>
    <value>0</value>
  </property>

 <property>

<name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
    <value>org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin</value>
  </property>

 <property>
    <name>yarn.nodemanager.container-executor.class</name>

<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>



2013/7/4 Chuan Liu <ch...@microsoft.com>

>  I think you need to change the following configurations in yarn-site.xml
> to enable CPU resource limits:
>
> 'yarn.nodemanager.container-monitor.resource-calculator.class' =
> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator'
>
> 'yarn.nodemanager.container-executor.class' =
> 'org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor'
>
> Some LinuxContainerExecutor configurations:
>
> yarn.nodemanager.linux-container-executor.path
>
> yarn.nodemanager.linux-container-executor.resources-handler.class
>
> yarn.nodemanager.linux-container-executor.cgroups.hierarchy
>
> yarn.nodemanager.linux-container-executor.cgroups.mount
>
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
>
> -Chuan
>
> *From:* sam liu [mailto:samliuhadoop@gmail.com]
> *Sent:* Tuesday, July 02, 2013 8:33 PM
> *To:* user@hadoop.apache.org
> *Subject:* Parameter 'yarn.nodemanager.resource.cpu-cores' does not work
>
> Hi,
>
> With Hadoop 2.0.4-alpha, yarn.nodemanager.resource.cpu-cores does not work
> for me:
>
> 1. The performance of the same terasort job does not change, even after
> increasing or decreasing the value of 'yarn.nodemanager.resource.cpu-cores'
> in yarn-site.xml and restarting the YARN cluster.
>
> 2. Even if I set the value of both
> 'yarn.nodemanager.resource.cpu-cores' and
> 'yarn.nodemanager.vcores-pcores-ratio' to 0, the MR job still completes
> without any exception, but the expected behavior should be that no CPU
> could be assigned to the container, and then no job could be executed
> on the cluster. Right?
>
> Thanks!
>


RE: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

Posted by Chuan Liu <ch...@microsoft.com>.
I think you need to change the following configurations in yarn-site.xml to enable CPU resource limits.

'yarn.nodemanager.container-monitor.resource-calculator.class' =
'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator'

'yarn.nodemanager.container-executor.class' =
'org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor'

Some LinuxContainerExecutor configurations:
yarn.nodemanager.linux-container-executor.path
yarn.nodemanager.linux-container-executor.resources-handler.class
yarn.nodemanager.linux-container-executor.cgroups.hierarchy
yarn.nodemanager.linux-container-executor.cgroups.mount
yarn.nodemanager.linux-container-executor.cgroups.mount-path
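
Put together as yarn-site.xml entries, that looks roughly like the sketch below. The cgroups handler class and the mount path are illustrative values from Hadoop 2.x defaults and a typical Linux layout — verify them against your distribution:

```xml
<!-- Sketch: enforce container CPU limits via cgroups.
     Requires the container-executor binary to be set up with the
     right permissions on every NodeManager host. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <!-- cgroup hierarchy under which YARN creates per-container groups -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>
<property>
  <!-- let the NodeManager mount the cgroup controllers if not mounted -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  <value>/sys/fs/cgroup</value>
</property>
```

Without the cgroups resources handler, vcore settings only affect scheduling decisions; they do not throttle a running container's actual CPU usage.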

-Chuan

From: sam liu [mailto:samliuhadoop@gmail.com]
Sent: Tuesday, July 02, 2013 8:33 PM
To: user@hadoop.apache.org
Subject: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

Hi,

With Hadoop 2.0.4-alpha, yarn.nodemanager.resource.cpu-cores does not work for me:

1. The performance of the same terasort job does not change, even after increasing or decreasing the value of 'yarn.nodemanager.resource.cpu-cores' in yarn-site.xml and restarting the YARN cluster.

2. Even if I set the value of both 'yarn.nodemanager.resource.cpu-cores' and 'yarn.nodemanager.vcores-pcores-ratio' to 0, the MR job still completes without any exception, but the expected behavior should be that no CPU could be assigned to the container, and then no job could be executed on the cluster. Right?

Thanks!

