Posted to yarn-issues@hadoop.apache.org by "skrho (JIRA)" <ji...@apache.org> on 2015/06/02 10:04:55 UTC

[jira] [Updated] (YARN-3758) The minimum memory setting(yarn.scheduler.minimum-allocation-mb) is not working in container

     [ https://issues.apache.org/jira/browse/YARN-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated YARN-3758:
------------------------
    Description: 
Hello there~~

I have 2 clusters.

The first cluster has 5 nodes, 1 default application queue, and 8 GB of physical memory per node.
The second cluster has 10 nodes, 2 application queues, and 230 GB of physical memory per node.

Whenever a MapReduce job runs, I want the ResourceManager to give each container the minimum memory of 256 MB.

So I changed the following settings in yarn-site.xml & mapred-site.xml (a sketch of the corresponding XML is shown after the list):

yarn.scheduler.minimum-allocation-mb : 256
mapreduce.map.java.opts : -Xms256m 
mapreduce.reduce.java.opts : -Xms256m 
mapreduce.map.memory.mb : 256 
mapreduce.reduce.memory.mb : 256 
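
For reference, this is roughly how I understand these settings map onto the XML files; it is only a sketch, and my real config files contain other properties as well:

yarn-site.xml (read by the ResourceManager's scheduler, so it lives on the ResourceManager host):

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
  </property>

mapred-site.xml (job-level settings, picked up from the client configuration when the job is submitted):

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>256</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>256</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xms256m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xms256m</value>
  </property>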


On the first cluster, whenever a MapReduce job is running, I can see 256 MB of used memory in the web console ( http://installedIP:8088/cluster/nodes ).
But on the second cluster, whenever a MapReduce job is running, I see 1024 MB of used memory in the web console ( http://installedIP:8088/cluster/nodes ).

I know the default memory value is 1024 MB, so if the memory settings are not changed, the default value is used.
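
If I understand the scheduler correctly (my assumption, I have not checked the code), each container request is rounded up to a multiple of yarn.scheduler.minimum-allocation-mb, which would explain the exact numbers I see:

  requested 256 MB, minimum-allocation-mb = 256   ->  256 MB container  (first cluster)
  requested 256 MB, minimum-allocation-mb = 1024  -> 1024 MB container  (second cluster, same as the default)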

I have been testing for two weeks, but I don't know why the minimum memory setting is not working on the second cluster.

Why does this difference happen?

Is my configuration wrong, or is there a bug?

Thank you for reading~~

  was:
Hello there~~

I have 2 clusters.

The first cluster has 5 nodes, 1 default application queue, and 8 GB of physical memory per node.
The second cluster has 10 nodes, 2 application queues, and 230 GB of physical memory per node.

Whenever a MapReduce job runs, I want the ResourceManager to give each container the minimum memory of 256 MB.

So I changed the following settings in yarn-site.xml:

yarn.scheduler.minimum-allocation-mb : 256
mapreduce.map.java.opts : -Xms256m 
mapreduce.reduce.java.opts : -Xms256m 
mapreduce.map.memory.mb : 256 
mapreduce.reduce.memory.mb : 256 


On the first cluster, whenever a MapReduce job is running, I can see 256 MB of used memory in the web console ( http://installedIP:8088/cluster/nodes ).
But on the second cluster, whenever a MapReduce job is running, I see 1024 MB of used memory in the web console ( http://installedIP:8088/cluster/nodes ).

I know the default memory value is 1024 MB, so if the memory settings are not changed, the default value is used.

I have been testing for two weeks, but I don't know why the minimum memory setting is not working on the second cluster.

Why does this difference happen?

Is my configuration wrong, or is there a bug?

Thank you for reading~~


> The minimum memory setting(yarn.scheduler.minimum-allocation-mb) is not working in container
> --------------------------------------------------------------------------------------------
>
>                 Key: YARN-3758
>                 URL: https://issues.apache.org/jira/browse/YARN-3758
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>    Affects Versions: 2.4.0
>            Reporter: skrho
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)