Posted to dev@trafficserver.apache.org by Yunkai Zhang <yu...@gmail.com> on 2012/12/29 18:39:59 UTC

Question about RamCacheLRU

Hi folks:

I'm reading the code for RamCacheLRU, but I was confused by the RamCacheLRU->lru
queue, defined as follows:

struct RamCacheLRU: public RamCache {
   ...
   Que(RamCacheLRUEntry, lru_link) lru;
   DList(RamCacheLRUEntry, hash_link) *bucket;
   ...
};

By reading the put/get/remove functions of the RamCacheLRU class, it seems that
the LRU algorithm is implemented by accessing the *bucket* list instead of the
*lru* queue.

Do I understand it correctly? If so, we can remove the lru queue and the related
code to speed up the put/get functions of the LRU a bit.

-- 
Yunkai Zhang
Work at Taobao

Re: Question about RamCacheLRU

Posted by Yunkai Zhang <yu...@gmail.com>.
On Sun, Dec 30, 2012 at 6:41 PM, Yunkai Zhang <yu...@gmail.com> wrote:

> Hi John:
>
> I have posted the patch on the TS-1006 issue's page
> (https://issues.apache.org/jira/browse/TS-1006); there is the patch's
> link: https://issues.apache.org/jira/secure/attachment/12562712/0006-RamCacheLRU-split-LRU-queue-into-multiple-queues-to-.patch
>

Sorry,

s/there is the patch's/here is the patch's/

:D





-- 
Yunkai Zhang
Work at Taobao

Re: Question about RamCacheLRU

Posted by Yunkai Zhang <yu...@gmail.com>.
On Sun, Dec 30, 2012 at 12:50 PM, Yunkai Zhang <yu...@gmail.com> wrote:

>
> It seems that my coworker split the original single LRU queue into multiple
> queues according to data size (e->data->_size_index). What they did was not to
> optimize the LRU itself, but to reduce the memory fragmentation caused by
> frequent allocation and freeing.
>
> I'll send that patch here, and I hope you can give some advice.
>


Hi John:

I have posted the patch on the TS-1006 issue's page
(https://issues.apache.org/jira/browse/TS-1006); there is the patch's
link: https://issues.apache.org/jira/secure/attachment/12562712/0006-RamCacheLRU-split-LRU-queue-into-multiple-queues-to-.patch

We'd welcome any advice :)






-- 
Yunkai Zhang
Work at Taobao

Re: Question about RamCacheLRU

Posted by Yunkai Zhang <yu...@gmail.com>.
On Sun, Dec 30, 2012 at 12:17 PM, John Plevyak <jp...@acm.org> wrote:

> Lol, if they have optimized it by removing the LRU nature, it was perhaps
> overzealous, or perhaps your workload is such that it fits within the RAM
> cache, so replacement is not an issue. Without the LRU there are
> approximately the same number of buckets as objects, so replacement based
> on bucket would be largely random. Instead we can have a partitioned
> LRU, and we are probably going to have to go that route, as it looks like
> lock contention in the cache is pretty bad. That's the next area I was
> thinking of looking at...
>


It seems that my coworker split the original single LRU queue into multiple
queues according to data size (e->data->_size_index). What they did was not to
optimize the LRU itself, but to reduce the memory fragmentation caused by
frequent allocation and freeing.

I'll send that patch here, and I hope you can give some advice.
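
The size-class idea can be sketched roughly like this (hypothetical names and
thresholds; the real patch is attached to TS-1006): one LRU queue per size
index, so a buffer freed by eviction matches the size of the entry about to be
allocated.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Sketch of the size-class idea (hypothetical names, not the TS-1006
// patch itself): entries are grouped into LRU queues by size index, so
// an eviction in one class frees a buffer that the next allocation in
// the same class can reuse, reducing fragmentation from mixing sizes.
constexpr int kNumClasses = 8;

// Map a byte size to a size class: class i holds sizes up to 1 KiB << i.
int size_class(size_t bytes) {
  int idx = 0;
  size_t limit = 1024;
  while (bytes > limit && idx < kNumClasses - 1) {
    limit <<= 1;
    ++idx;
  }
  return idx;
}

struct SizedLRU {
  std::list<size_t> queue[kNumClasses];  // one LRU queue per size class

  void put(size_t bytes) { queue[size_class(bytes)].push_back(bytes); }

  // Evict from the same class that is about to allocate, so the freed
  // buffer matches the incoming entry's size.
  bool evict_for(size_t bytes) {
    auto &q = queue[size_class(bytes)];
    if (q.empty()) return false;
    q.pop_front();
    return true;
  }
};
```

The point is not a better replacement policy but allocator friendliness: each
queue only ever recycles buffers of its own size class.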






-- 
Yunkai Zhang
Work at Taobao

Re: Question about RamCacheLRU

Posted by John Plevyak <jp...@acm.org>.
Lol, if they have optimized it by removing the LRU nature, it was perhaps
overzealous, or perhaps your workload is such that it fits within the RAM
cache, so replacement is not an issue. Without the LRU there are
approximately the same number of buckets as objects, so replacement based
on bucket would be largely random. Instead we can have a partitioned
LRU, and we are probably going to have to go that route, as it looks like
lock contention in the cache is pretty bad. That's the next area I was
thinking of looking at...

cheers,
john
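
A partitioned LRU of the kind described above could look roughly like this
(a hypothetical sketch, not ATS code): the key hash selects one of N
independent LRU caches, each with its own lock, so replacement stays LRU per
partition while lock contention drops.

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <mutex>
#include <unordered_map>

// Hypothetical partitioned LRU: threads touching different partitions
// never contend, and each partition runs an ordinary LRU internally.
constexpr int kPartitions = 4;

struct Partition {
  std::mutex lock;                      // per-partition lock
  std::list<uint64_t> lru;              // front = LRU
  std::unordered_map<uint64_t, std::list<uint64_t>::iterator> index;
};

struct PartitionedLRU {
  Partition part[kPartitions];

  static int pick(uint64_t key) {       // key hash chooses the partition
    return static_cast<int>(key % kPartitions);
  }

  void put(uint64_t key, size_t max_per_part) {
    Partition &p = part[pick(key)];
    std::lock_guard<std::mutex> g(p.lock);  // contention limited to one partition
    auto it = p.index.find(key);
    if (it != p.index.end()) p.lru.erase(it->second);
    p.lru.push_back(key);
    p.index[key] = std::prev(p.lru.end());
    while (p.lru.size() > max_per_part) {   // evict this partition's LRU entry
      p.index.erase(p.lru.front());
      p.lru.pop_front();
    }
  }

  bool get(uint64_t key) {
    Partition &p = part[pick(key)];
    std::lock_guard<std::mutex> g(p.lock);
    auto it = p.index.find(key);
    if (it == p.index.end()) return false;
    p.lru.splice(p.lru.end(), p.lru, it->second);  // refresh recency
    return true;
  }
};
```

The trade-off is that eviction decisions become local: a hot partition may
evict entries that a global LRU would have kept, which is usually acceptable
in exchange for the reduced lock contention.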

On Sat, Dec 29, 2012 at 7:59 PM, Yunkai Zhang <yu...@gmail.com> wrote:

> On Sun, Dec 30, 2012 at 1:57 AM, John Plevyak <jp...@acm.org> wrote:
>
> > This code in ::put() implements the LRU, and as you can see, it uses the
> > LRU data structure (i.e. simple list from most recently used to least):
> >
> >   while (bytes > max_bytes) {
> >     RamCacheLRUEntry *ee = lru.dequeue();
> >     if (ee)
> >       remove(ee);
> >     else
> >       break;
> >   }
> >
>
> It seems that the code I read had been changed in the section you showed
> above, as my coworker has optimized it a bit. But thanks for your
> explanation all the same.
>

Re: Question about RamCacheLRU

Posted by Yunkai Zhang <yu...@gmail.com>.
On Sun, Dec 30, 2012 at 1:57 AM, John Plevyak <jp...@acm.org> wrote:

> This code in ::put() implements the LRU, and as you can see, it uses the
> LRU data structure (i.e. simple list from most recently used to least):
>
>   while (bytes > max_bytes) {
>     RamCacheLRUEntry *ee = lru.dequeue();
>     if (ee)
>       remove(ee);
>     else
>       break;
>   }
>

It seems that the code I read had been changed in the section you showed
above, as my coworker has optimized it a bit. But thanks for your
explanation all the same.





-- 
Yunkai Zhang
Work at Taobao

Re: Question about RamCacheLRU

Posted by John Plevyak <jp...@acm.org>.
This code in ::put() implements the LRU, and as you can see, it uses the
LRU data structure (i.e. simple list from most recently used to least):

  while (bytes > max_bytes) {
    RamCacheLRUEntry *ee = lru.dequeue();
    if (ee)
      remove(ee);
    else
      break;
  }
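
To see the loop in context, here is a runnable miniature with byte accounting
around it (the surroundings are hypothetical; only the shape of the while loop
comes from the real ::put()):

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Hypothetical scaffolding around the eviction loop quoted above:
// dequeue from the LRU end until the cache fits under max_bytes again.
struct ByteLRU {
  std::list<size_t> lru;   // front = least recently used; values are entry sizes
  size_t bytes = 0;
  size_t max_bytes;

  explicit ByteLRU(size_t limit) : max_bytes(limit) {}

  void put(size_t entry_bytes) {
    lru.push_back(entry_bytes);
    bytes += entry_bytes;
    while (bytes > max_bytes) {    // same shape as the ::put() loop above
      if (lru.empty()) break;      // the "else break" guard: nothing left to evict
      bytes -= lru.front();
      lru.pop_front();
    }
  }
};
```

Because the loop dequeues from the lru list rather than from a hash bucket,
eviction always removes the least recently used entry, which is exactly the
property the bucket list alone cannot provide.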



On Sat, Dec 29, 2012 at 9:39 AM, Yunkai Zhang <yu...@gmail.com> wrote:

> Hi folks:
>
> I'm reading the code for RamCacheLRU, but I was confused by the RamCacheLRU->lru
> queue, defined as follows:
>
> struct RamCacheLRU: public RamCache {
>    ...
>    Que(RamCacheLRUEntry, lru_link) lru;
>    DList(RamCacheLRUEntry, hash_link) *bucket;
>    ...
> };
>
> By reading the put/get/remove functions of the RamCacheLRU class, it seems that
> the LRU algorithm is implemented by accessing the *bucket* list instead of the
> *lru* queue.
>
> Do I understand it correctly? If so, we can remove the lru queue and the related
> code to speed up the put/get functions of the LRU a bit.
>
> --
> Yunkai Zhang
> Work at Taobao
>