Posted to user@hadoop.apache.org by 임정택 <ka...@gmail.com> on 2015/01/28 08:47:26 UTC
Question about YARN Memory allocation
Hello all!
I'm new to YARN, so this may be a beginner question.
(I've been using MRv1 and only just switched.)
I'm running HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
In order to migrate from MRv1 to YARN, I read several docs and changed the
configurations.
```
yarn.nodemanager.resource.memory-mb: 12288
yarn.scheduler.minimum-allocation-mb: 512
mapreduce.map.memory.mb: 1536
mapreduce.reduce.memory.mb: 1536
mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
-Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
-Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
```
I'm expecting 80 containers to run concurrently, but in
reality it's 60 containers. (59 maps ran concurrently; the remaining one is
presumably the ApplicationMaster.)
All YarnChilds' VIRT is above 1.5G and below 2G right now, so I
suspect that is the cause.
But it's better to make this clear, so I can understand YARN better.
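The arithmetic behind my expectation, as a quick Python sketch (the 2048 MB figure below is only a guess at what each container is actually granted, inferred from the VIRT sizes, not something confirmed yet):

```python
# Expected concurrent containers from the configuration above.
node_mb = 12288   # yarn.nodemanager.resource.memory-mb (per NodeManager)
task_mb = 1536    # mapreduce.map.memory.mb / mapreduce.reduce.memory.mb
slaves = 10

expected = (node_mb // task_mb) * slaves
print(expected)        # 8 containers per node * 10 nodes = 80

# The 60 containers actually observed would match each container
# being granted 2048 MB rather than the requested 1536 MB (a guess):
observed_guess = (node_mb // 2048) * slaves
print(observed_guess)  # 6 containers per node * 10 nodes = 60
```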
Any help & explanations are really appreciated.
Thanks!
Best regards.
Jungtaek Lim (HeartSaVioR)
Re: Question about YARN Memory allocation
Posted by Ted Yu <yu...@gmail.com>.
LCE refers to the Linux Container Executor.
Please take a look at yarn-default.xml.
Cheers
> On Jan 28, 2015, at 12:49 AM, 임정택 <ka...@gmail.com> wrote:
>
> Hi!
>
> At first, it was my mistake. :( All memory is "in use".
> Also I found each Container's information says that "TotalMemoryNeeded 2048 / TotalVCoreNeeded 1".
> I don't understand why a container needs 2048m (2G) of memory to run.
>
> Maybe I have to learn YARN schedulers and relevant configurations.
> I'm a newbie to YARN, and I'm learning it by reading some docs. :)
>
> Btw, what's LCE and DRC?
>
> Thanks again for helping.
>
> Regards.
> Jungtaek Lim (HeartSaVioR)
>
>
> 2015-01-28 17:35 GMT+09:00 Naganarasimha G R (Naga) <ga...@huawei.com>:
>> Hi Jungtaek Lim,
>> Earlier we faced a similar problem of reservation with the Capacity Scheduler, and it's actually solved by YARN-1769 (part of Hadoop 2.6).
>> So I hope it might help you, if you have configured the Capacity Scheduler. Also check whether "yarn.scheduler.capacity.node-locality-delay" is configured (it might not help directly, but it might reduce the probability of reservation).
>> I have one doubt about the info: in the image it seems 20GB and 10 vcores are reserved, but you seem to say all are reserved?
>> Are LCE & DRC also configured? If so, what vcores are configured for the NM and the app's containers?
>>
>> Regards,
>> Naga
>> From: 임정택 [kabhwan@gmail.com]
>> Sent: Wednesday, January 28, 2015 13:23
>> To: user@hadoop.apache.org
>> Subject: Re: Question about YARN Memory allocation
>>
>> Forgot to add one thing, all memory (120G) is reserved now.
>>
>> Apps Submitted: 2 | Apps Pending: 1 | Apps Running: 1 | Apps Completed: 0
>> Containers Running: 60 | Memory Used: 120 GB | Memory Total: 120 GB | Memory Reserved: 20 GB
>> VCores Used: 60 | VCores Total: 80 | VCores Reserved: 10
>> Active Nodes: 10 | Decommissioned Nodes: 0 | Lost Nodes: 0 | Unhealthy Nodes: 0 | Rebooted Nodes: 0
>> Furthermore, 10 more VCores are reserved. I don't know what that is.
>>
>>
>> 2015-01-28 16:47 GMT+09:00 임정택 <ka...@gmail.com>:
>>> Hello all!
>>>
>>> I'm new to YARN, so this may be a beginner question.
>>> (I've been using MRv1 and only just switched.)
>>>
>>> I'm running HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
>>> In order to migrate from MRv1 to YARN, I read several docs and changed the configurations.
>>>
>>> ```
>>> yarn.nodemanager.resource.memory-mb: 12288
>>> yarn.scheduler.minimum-allocation-mb: 512
>>> mapreduce.map.memory.mb: 1536
>>> mapreduce.reduce.memory.mb: 1536
>>> mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
>>> mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
>>> ```
>>>
>>> I'm expecting 80 containers to run concurrently, but in reality it's 60 containers. (59 maps ran concurrently; the remaining one is presumably the ApplicationMaster.)
>>>
>>> All YarnChilds' VIRT is above 1.5G and below 2G right now, so I suspect that is the cause.
>>> But it's better to make this clear, so I can understand YARN better.
>>>
>>> Any help & explanations are really appreciated.
>>> Thanks!
>>>
>>> Best regards.
>>> Jungtaek Lim (HeartSaVioR)
>>
>>
>>
>> --
>> Name : 임 정택
>> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
>> Twitter : http://twitter.com/heartsavior
>> LinkedIn : http://www.linkedin.com/in/heartsavior
>
>
>
> --
> Name : 임 정택
> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
> Twitter : http://twitter.com/heartsavior
> LinkedIn : http://www.linkedin.com/in/heartsavior
RE: Question about YARN Memory allocation
Posted by "Naganarasimha G R (Naga)" <ga...@huawei.com>.
Hi
When you click on Active Nodes in the web UI, you will be able to see all the nodes and the status of reserved resources as well.
Can you check whether any node is actually under-utilized (i.e. resources are available but no containers are assigned)? If that is the case, then the solution is in YARN-1769 (assuming you have configured the Capacity Scheduler).
Refer to "http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html" for more information on the Capacity Scheduler.
Regards,
Naga
________________________________
From: 임정택 [kabhwan@gmail.com]
Sent: Wednesday, January 28, 2015 14:19
To: user@hadoop.apache.org
Subject: Re: Question about YARN Memory allocation
Hi!
At first, it was my mistake. :( All memory is "in use".
Also I found each Container's information says that "TotalMemoryNeeded 2048 / TotalVCoreNeeded 1".
I don't understand why a container needs 2048m (2G) of memory to run.
Maybe I have to learn about YARN schedulers and the relevant configurations.
I'm a newbie to YARN, and I'm learning it by reading some docs. :)
Btw, what's LCE and DRC?
Thanks again for helping.
Regards.
Jungtaek Lim (HeartSaVioR)
2015-01-28 17:35 GMT+09:00 Naganarasimha G R (Naga) <ga...@huawei.com>:
Hi Jungtaek Lim,
Earlier we faced a similar problem of reservation with the Capacity Scheduler, and it's actually solved by YARN-1769 (part of Hadoop 2.6).
So I hope it might help you, if you have configured the Capacity Scheduler. Also check whether "yarn.scheduler.capacity.node-locality-delay" is configured (it might not help directly, but it might reduce the probability of reservation).
I have one doubt about the info: in the image it seems 20GB and 10 vcores are reserved, but you seem to say all are reserved?
Are LCE & DRC also configured? If so, what vcores are configured for the NM and the app's containers?
Regards,
Naga
________________________________
From: 임정택 [kabhwan@gmail.com]
Sent: Wednesday, January 28, 2015 13:23
To: user@hadoop.apache.org
Subject: Re: Question about YARN Memory allocation
Forgot to add one thing, all memory (120G) is reserved now.
Apps Submitted: 2 | Apps Pending: 1 | Apps Running: 1 | Apps Completed: 0
Containers Running: 60 | Memory Used: 120 GB | Memory Total: 120 GB | Memory Reserved: 20 GB
VCores Used: 60 | VCores Total: 80 | VCores Reserved: 10
Active Nodes: 10 | Decommissioned Nodes: 0 | Lost Nodes: 0 | Unhealthy Nodes: 0 | Rebooted Nodes: 0
Furthermore, 10 more VCores are reserved. I don't know what that is.
2015-01-28 16:47 GMT+09:00 임정택 <ka...@gmail.com>:
Hello all!
I'm new to YARN, so this may be a beginner question.
(I've been using MRv1 and only just switched.)
I'm running HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
In order to migrate from MRv1 to YARN, I read several docs and changed the configurations.
```
yarn.nodemanager.resource.memory-mb: 12288
yarn.scheduler.minimum-allocation-mb: 512
mapreduce.map.memory.mb: 1536
mapreduce.reduce.memory.mb: 1536
mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
```
I'm expecting 80 containers to run concurrently, but in reality it's 60 containers. (59 maps ran concurrently; the remaining one is presumably the ApplicationMaster.)
All YarnChilds' VIRT is above 1.5G and below 2G right now, so I suspect that is the cause.
But it's better to make this clear, so I can understand YARN better.
Any help & explanations are really appreciated.
Thanks!
Best regards.
Jungtaek Lim (HeartSaVioR)
--
Name : 임 정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
--
Name : 임 정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
Re: Question about YARN Memory allocation
Posted by 임정택 <ka...@gmail.com>.
Hi!
At first, it was my mistake. :( All memory is "in use".
Also I found each Container's information says that "TotalMemoryNeeded 2048
/ TotalVCoreNeeded 1".
I don't understand why a container needs 2048m (2G) of memory to run.
Maybe I have to learn about YARN schedulers and the relevant configurations.
I'm a newbie to YARN, and I'm learning it by reading some docs. :)
Btw, what's LCE and DRC?
Thanks again for helping.
Regards.
Jungtaek Lim (HeartSaVioR)
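One plausible explanation for the 2048 MB figure, sketched below as a hypothesis rather than a confirmed diagnosis: YARN's schedulers normalize each container request by rounding it up to a multiple of the scheduler's minimum/increment allocation. If the ResourceManager were still operating with a 1024 MB minimum (e.g. because the 512 MB setting was not picked up on the RM side), a 1536 MB request would round up to exactly 2048 MB:

```python
# Sketch of YARN's request normalization: the RM rounds each container
# request UP to the next multiple of the effective minimum allocation.
# Hypothesis (unconfirmed): the RM is using a 1024 MB minimum, which
# would turn the 1536 MB request into the observed 2048 MB allocation.
import math

def normalize(requested_mb: int, minimum_mb: int) -> int:
    """Round a request up to the next multiple of the minimum allocation."""
    return minimum_mb * math.ceil(requested_mb / minimum_mb)

print(normalize(1536, 512))   # 1536 -> with the intended 512 MB minimum
print(normalize(1536, 1024))  # 2048 -> matches the reported TotalMemoryNeeded
```

If this is the cause, verifying yarn.scheduler.minimum-allocation-mb (and, for the Fair Scheduler, yarn.scheduler.increment-allocation-mb) as seen by the RM would confirm it.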
2015-01-28 17:35 GMT+09:00 Naganarasimha G R (Naga) <
garlanaganarasimha@huawei.com>:
> Hi Jungtaek Lim,
> Earlier we faced similar problem of reservation with Capacity scheduler
> and its actually solved with YARN-1769 (part of 2.6 hadoop)
> So hope it might help you if you have configured Capacity scheduler. also
> check whether "yarn.scheduler.capacity.node-locality-delay" is configured
> (might not be direct help but might reduce probability of reservation ).
> I have one doubt with info : In the image it seems to be 20GB and 10
> vcores reserved but you seem to say all are reserved ?
> Is LCE & DRC also configured ? if so what are the vcores configured for NM
> and the app's containers?
>
> Regards,
> Naga
> ------------------------------
> *From:* 임정택 [kabhwan@gmail.com]
> *Sent:* Wednesday, January 28, 2015 13:23
> *To:* user@hadoop.apache.org
> *Subject:* Re: Question about YARN Memory allocation
>
> Forgot to add one thing, all memory (120G) is reserved now.
>
> Apps Submitted: 2        Apps Pending: 1        Apps Running: 1
> Apps Completed: 0        Containers Running: 60
> Memory Used: 120 GB      Memory Total: 120 GB   Memory Reserved: 20 GB
> VCores Used: 60          VCores Total: 80       VCores Reserved: 10
> Active Nodes: 10         Decommissioned Nodes: 0
> Lost Nodes: 0            Unhealthy Nodes: 0     Rebooted Nodes: 0
> Furthermore, 10 more VCores are reserved. I don't know what is it.
>
>
> 2015-01-28 16:47 GMT+09:00 임정택 <ka...@gmail.com>:
>
>> Hello all!
>>
>> I'm new to YARN, so it could be beginner question.
>> (I've been used MRv1 and changed just now.)
>>
>> I'm using HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
>> In order to migrate MRv1 to YARN, I read several docs, and change
>> configrations.
>>
>> ```
>> yarn.nodemanager.resource.memory-mb: 12288
>> yarn.scheduler.minimum-allocation-mb: 512
>> mapreduce.map.memory.mb: 1536
>> mapreduce.reduce.memory.mb: 1536
>> mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
>> -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
>> mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
>> -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
>> ```
>>
>> I'm expecting that it will be 80 containers running concurrently, but
>> in real it's 60 containers. (59 maps ran concurrently, maybe 1 is
>> ApplicationManager.)
>>
>> All YarnChilds' VIRT are higher than 1.5G and lower than 2G now, so I'm
>> suspecting it.
>> But it's better to make clear, to understand YARN clearer.
>>
>> Any helps & explanations are really appreciated.
>> Thanks!
>>
>> Best regards.
>> Jungtaek Lim (HeartSaVioR)
>>
>>
>
>
> --
> Name : 임 정택
> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
> Twitter : http://twitter.com/heartsavior
> LinkedIn : http://www.linkedin.com/in/heartsavior
>
--
Name : 임 정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
RE: Question about YARN Memory allocation
Posted by "Naganarasimha G R (Naga)" <ga...@huawei.com>.
Hi Jungtaek Lim,
We faced a similar reservation problem earlier with the Capacity Scheduler, and it was actually solved by YARN-1769 (part of Hadoop 2.6).
So it might help if you have configured the Capacity Scheduler. Also check whether "yarn.scheduler.capacity.node-locality-delay" is configured (it may not help directly, but it might reduce the probability of reservation).
I have one doubt about the info: in the image it seems 20 GB and 10 vcores are reserved, but you seem to say all are reserved?
Are LCE & DRC also configured? If so, what vcores are configured for the NM and for the app's containers?
Regards,
Naga
________________________________
From: 임정택 [kabhwan@gmail.com]
Sent: Wednesday, January 28, 2015 13:23
To: user@hadoop.apache.org
Subject: Re: Question about YARN Memory allocation
Forgot to add one thing, all memory (120G) is reserved now.
Apps Submitted: 2        Apps Pending: 1        Apps Running: 1
Apps Completed: 0        Containers Running: 60
Memory Used: 120 GB      Memory Total: 120 GB   Memory Reserved: 20 GB
VCores Used: 60          VCores Total: 80       VCores Reserved: 10
Active Nodes: 10         Decommissioned Nodes: 0
Lost Nodes: 0            Unhealthy Nodes: 0     Rebooted Nodes: 0
Furthermore, 10 more VCores are reserved; I don't know what that is.
2015-01-28 16:47 GMT+09:00 임정택 <ka...@gmail.com>:
Hello all!
I'm new to YARN, so this may be a beginner question.
(I had been using MRv1 and switched just now.)
I'm using HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
To migrate from MRv1 to YARN, I read several docs and changed these configurations:
```
yarn.nodemanager.resource.memory-mb: 12288
yarn.scheduler.minimum-allocation-mb: 512
mapreduce.map.memory.mb: 1536
mapreduce.reduce.memory.mb: 1536
mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
```
I expected 80 containers to run concurrently, but in reality it's 60 containers. (59 maps ran concurrently; the remaining 1 is probably the ApplicationMaster.)
All YarnChilds' VIRT is now above 1.5G and below 2G, so I suspect that is the cause.
But I'd rather make it clear, to understand YARN better.
Any help & explanations are really appreciated.
Thanks!
Best regards.
Jungtaek Lim (HeartSaVioR)
--
Name : 임 정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
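Naga's question about DRC matters because, when the DominantResourceCalculator is enabled, a node fits only as many containers as both its memory and its vcores allow, whereas the default DefaultResourceCalculator considers memory only. A sketch under this thread's figures, assuming 8 vcores per NodeManager (the UI shows 80 vcores total across 10 nodes):

```python
# How many containers fit on one NodeManager under DRC-style accounting.
# Assumed per-node capacities: 12288 MB and 8 vcores (80 vcores / 10 nodes).

nm_memory_mb = 12288
nm_vcores = 8

container_mb = 2048      # allocation reported by the RM in this thread
container_vcores = 1     # TotalVCoreNeeded from the container info

fit_by_memory = nm_memory_mb // container_mb   # 6
fit_by_vcores = nm_vcores // container_vcores  # 8

# A container must fit in every resource dimension at once.
containers_per_node = min(fit_by_memory, fit_by_vcores)
print(containers_per_node)  # 6 -> memory is the binding resource here
```

So even if DRC were enabled here, memory (6 containers per node) would still be the binding constraint, consistent with the 60 running containers.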