Posted to user@hadoop.apache.org by yaoxiaohua <ya...@outlook.com> on 2016/01/14 06:53:18 UTC
Hadoop configuration issue
Hi guys,
We use huge pages on Linux; the total huge page memory is 16G.
Our environment is:
128G memory,
28 disks,
32 (logical) CPUs,
IBM JDK 1.7,
CDH 2.3,
Linux: overcommit 0 (vm.overcommit_memory = 0).
Each NodeManager is given 100G of memory in total and 24 vcores,
so one NodeManager can assign 24 containers at the same time.
Every container's Java opts are:
-server -Xms1200m -Xmx1200m -Xlp -Xnoclassgc
-Xgcpolicy:gencon -Xjit:optLevel=hot
-Xlp in the IBM JDK means "use large pages".
My question is: when the cluster is busy, I see all 24 containers
launched at the same time, but we only have 16G of huge pages in total.
Why does this happen? 24 * 1.2g > 16G
Thanks
Best Regards,
Evan
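
The NodeManager totals described above would normally be set in yarn-site.xml. As a sketch only (the property names are standard YARN ones; the values are assumptions matching the numbers in this post, not the poster's actual config):

```xml
<!-- yarn-site.xml: resources one NodeManager advertises to the ResourceManager -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>102400</value> <!-- 100G total memory for containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value>     <!-- 24 vcores => up to 24 single-vcore containers -->
</property>
```

With 1.2G heaps, the memory limit is nowhere near binding, so the vcore count is what caps this node at 24 concurrent containers.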
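
When debugging this kind of question, the kernel's huge-page pool and its current usage can be read from /proc/meminfo (standard on Linux; the values shown on any given host will of course differ from this post's 16G):

```shell
# Show the configured huge page pool and how much of it is in use.
# HugePages_Total * Hugepagesize is the pool size; HugePages_Free is what
# is still available for a JVM started with -Xlp to claim.
grep Huge /proc/meminfo
```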
Re: Hadoop configuration issue
Posted by Drake민영근 <dr...@nexr.com>.
Hi Evan,
I think this is why: 24 * 1.2g < 100g. I don't know about the "huge
pages" feature of the IBM JDK, but you could still configure 16g in the
NodeManager.
Thanks.
Drake 민영근 Ph.D
kt NexR
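
The arithmetic behind this reply, as a quick sketch: the 24 heaps together exceed the 16G huge-page pool, but they fit comfortably inside the NodeManager's 100G, and YARN schedules only against the NodeManager total. Whether each JVM's -Xlp request is actually backed by huge pages is a separate matter between the JVM and the kernel:

```python
containers = 24
heap_gb = 1.2           # -Xmx1200m per container
total_heap = containers * heap_gb

hugepage_pool_gb = 16   # total huge page memory on the node
nodemanager_gb = 100    # memory the NodeManager offers to YARN

print(round(total_heap, 1))             # 28.8
print(total_heap > hugepage_pool_gb)    # True: not all heaps fit in huge pages
print(total_heap < nodemanager_gb)      # True: YARN still launches all 24
```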