Posted to users@trafficserver.apache.org by "Adam W. Dace" <co...@gmail.com> on 2015/06/27 14:43:02 UTC

Quick Question About Linux HugePages

First, let me say a big thank you to those who have worked on the
reclaimable freelist and HugePages code.

I'm not really sure why, but simply using HugePages has resulted in a 100%
speed boost on my ATS instance and it is now happily saturating my 50Mbps
home connection.  Whoo hoo!  :)

My quick question is this:  What happens if ATS goes to allocate a HugePage
and there are simply none left to be had?  Will it crash?

I haven't had this happen yet, but I'm trying to balance how much memory
I'm willing to give to ATS against how much I need to keep free so I can at
least compile ATS on the same system.

Thanks In Advance,

Adam

Re: Quick Question About Linux HugePages

Posted by "Adam W. Dace" <co...@gmail.com>.
Well, maybe I'm exaggerating.  The throughput is definitely only up about
20% or so, but before this I couldn't quite get ATS to saturate my home
connection, and latency was only sort of "okay".  Now the proxy feels as
fast as, or even lower-latency than, browsing without it.

My setup is a CentOS Linux 7 virtual server running ATS as a forward-only
proxy for my home Internet connection, over IPv6.
Right now I have roughly half the server's memory (496 MB) allocated to
HugePages at boot.

<adace@antelope:~> cat /proc/meminfo | grep Huge
AnonHugePages:     57344 kB
HugePages_Total:     248
HugePages_Free:       17
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
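
In case it helps anyone reproduce this, the reservation itself is just a
sysctl (or the equivalent hugepages= kernel boot parameter) plus the
allocator switch in records.config.  Something like the following, though
the page count is specific to my box and you should double-check the
setting name against your ATS version's records.config docs:

# /etc/sysctl.conf -- reserve 248 x 2 MB huge pages at boot
vm.nr_hugepages = 248

# records.config -- have ATS try huge pages for its allocations
CONFIG proxy.config.allocator.hugepages INT 1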

I've made a fair number of other tweaks as well.  A lot of them are
documented on the Wiki if you want to look.

Regards,

Adam

P.S.  My goal here was to work around the general routing/latency problems
that come with a generic Comcast Chicago-based connection.
A lot of those are being ironed out as the Internet continues to grow, but
I'm going to "keep on keeping on" with this approach: most of the major
websites seem to have a considerable presence in and around New York, and
this setup seems to help.


On Sat, Jun 27, 2015 at 2:00 PM Phil Sorber <so...@apache.org> wrote:

> On Sat, Jun 27, 2015 at 11:41 AM Phil Sorber <so...@apache.org> wrote:
>
>> On Sat, Jun 27, 2015 at 9:22 AM Leif Hedstrom <zw...@apache.org> wrote:
>>
>>>
>>>
>>>
>>>
>>> > On Jun 27, 2015, at 6:43 AM, Adam W. Dace <co...@gmail.com>
>>> wrote:
>>> >
>>> > First, let me say a big thank you to those who have worked on the
>>> reclaimable freelist and HugePages code.
>>>
>>> Cool! Note that reclaimable freelist is now gone and we will soon
>>> introduce a buddy allocation scheme.
>>>
>>> >
>>> > I'm not really sure why, but simply using HugePages has resulted in a
>>> 100% speed boost on my ATS instance and it is now happily saturating my
>>> 50Mbps home connection.  Whoo hoo!  :)
>>>
>>>
> I forgot to add in my last email, what is your setup here? Great to hear
> such gains, but I would have only expected ~20% in the best case scenario.
>
>
>>
>>> Woah that's incredible. Good job Phil Sorber :)
>>> >
>>> > My quick question is this:  What happens if ATS goes to allocate a
>>> HugePage and there are simply none left to be had?  Will it crash?
>>>
>>> I believe it will just fall back to normal allocations.
>>>
>>
>> That is correct. It will attempt to allocate huge pages every time and
>> fall back to regular malloc if it cannot.
>>
>>
>>>
>>> -- Leif
>>>
>>> >
>>> > I haven't had this happen, yet, but I'm trying to balance how much
>>> memory I'm willing to give to ATS with other memory left free so I can at
>>> least compile ATS on the same system.
>>> >
>>> > Thanks In Advance,
>>> >
>>> > Adam
>>> >
>>>
>>

Re: Quick Question About Linux HugePages

Posted by Phil Sorber <so...@apache.org>.
On Sat, Jun 27, 2015 at 11:41 AM Phil Sorber <so...@apache.org> wrote:

> On Sat, Jun 27, 2015 at 9:22 AM Leif Hedstrom <zw...@apache.org> wrote:
>
>>
>>
>>
>>
>> > On Jun 27, 2015, at 6:43 AM, Adam W. Dace <co...@gmail.com>
>> wrote:
>> >
>> > First, let me say a big thank you to those who have worked on the
>> reclaimable freelist and HugePages code.
>>
>> Cool! Note that reclaimable freelist is now gone and we will soon
>> introduce a buddy allocation scheme.
>>
>> >
>> > I'm not really sure why, but simply using HugePages has resulted in a
>> 100% speed boost on my ATS instance and it is now happily saturating my
>> 50Mbps home connection.  Whoo hoo!  :)
>>
>>
I forgot to add in my last email, what is your setup here? Great to hear
such gains, but I would have only expected ~20% in the best case scenario.


>
>> Woah that's incredible. Good job Phil Sorber :)
>> >
>> > My quick question is this:  What happens if ATS goes to allocate a
>> HugePage and there are simply none left to be had?  Will it crash?
>>
>> I believe it will just fall back to normal allocations.
>>
>
> That is correct. It will attempt to allocate huge pages every time and
> fall back to regular malloc if it cannot.
>
>
>>
>> -- Leif
>>
>> >
>> > I haven't had this happen, yet, but I'm trying to balance how much
>> memory I'm willing to give to ATS with other memory left free so I can at
>> least compile ATS on the same system.
>> >
>> > Thanks In Advance,
>> >
>> > Adam
>> >
>>
>

Re: Quick Question About Linux HugePages

Posted by Phil Sorber <so...@apache.org>.
On Sat, Jun 27, 2015 at 9:22 AM Leif Hedstrom <zw...@apache.org> wrote:

>
>
>
>
> > On Jun 27, 2015, at 6:43 AM, Adam W. Dace <co...@gmail.com>
> wrote:
> >
> > First, let me say a big thank you to those who have worked on the
> reclaimable freelist and HugePages code.
>
> Cool! Note that reclaimable freelist is now gone and we will soon
> introduce a buddy allocation scheme.
>
> >
> > I'm not really sure why, but simply using HugePages has resulted in a
> 100% speed boost on my ATS instance and it is now happily saturating my
> 50Mbps home connection.  Whoo hoo!  :)
>
>
> Woah that's incredible. Good job Phil Sorber :)
> >
> > My quick question is this:  What happens if ATS goes to allocate a
> HugePage and there are simply none left to be had?  Will it crash?
>
> I believe it will just fall back to normal allocations.
>

That is correct. It will attempt to allocate huge pages every time and
fall back to regular malloc if it cannot.
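
In other words, the pattern is roughly "try an mmap() with MAP_HUGETLB,
and if that fails, take the ordinary allocation path".  A minimal
standalone sketch of the idea (not the actual ATS allocator code) might
look like:

// Sketch of "try a huge page, fall back to a normal allocation".
// Illustrative only -- the real ATS allocator is more involved.
#include <sys/mman.h>   // mmap(), MAP_HUGETLB (Linux-specific)
#include <cstdlib>      // malloc()

static const size_t kHugePageSize = 2 * 1024 * 1024; // 2 MB, the usual x86_64 default

void *alloc_maybe_huge(size_t size, bool &is_huge)
{
  // Round the request up to a whole number of huge pages.
  size_t rounded = (size + kHugePageSize - 1) & ~(kHugePageSize - 1);

  void *p = mmap(nullptr, rounded, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
  if (p != MAP_FAILED) {
    is_huge = true;    // got a huge-page-backed region
    return p;
  }

  // No huge pages left (or MAP_HUGETLB unsupported): plain malloc instead.
  is_huge = false;
  return malloc(size);
}

The caller has to remember which path a buffer came from so it can later
munmap() or free() it; that bookkeeping is omitted here.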


>
> -- Leif
>
> >
> > I haven't had this happen, yet, but I'm trying to balance how much
> memory I'm willing to give to ATS with other memory left free so I can at
> least compile ATS on the same system.
> >
> > Thanks In Advance,
> >
> > Adam
> >
>

Re: Quick Question About Linux HugePages

Posted by Leif Hedstrom <zw...@apache.org>.



> On Jun 27, 2015, at 6:43 AM, Adam W. Dace <co...@gmail.com> wrote:
> 
> First, let me say a big thank you to those who have worked on the reclaimable freelist and HugePages code.

Cool! Note that the reclaimable freelist is now gone and we will soon introduce a buddy allocation scheme.

> 
> I'm not really sure why, but simply using HugePages has resulted in a 100% speed boost on my ATS instance and it is now happily saturating my 50Mbps home connection.  Whoo hoo!  :)


Whoa, that's incredible. Good job, Phil Sorber :)
> 
> My quick question is this:  What happens if ATS goes to allocate a HugePage and there are simply none left to be had?  Will it crash?

I believe it will just fall back to normal allocations.

-- Leif 

> 
> I haven't had this happen, yet, but I'm trying to balance how much memory I'm willing to give to ATS with other memory left free so I can at least compile ATS on the same system.
> 
> Thanks In Advance,
> 
> Adam
>