Posted to dev@trafficserver.apache.org by "Belmon, Stephane" <sb...@websense.com> on 2009/11/05 00:24:12 UTC

partitions

Hello YTS folks,

Yesterday (or was that on IRC?) I think it was stated that once around the ring, a partition gets dropped (rather than compacted I assume). Could you elaborate a bit on how the cache actually reclaims space in the current version? (I'll take my answer off the air ;-) especially if you want to compare and contrast with how it used to work, but I thought more people could be interested).

--
Stephane Belmon



 Protected by Websense Hosted Email Security -- www.websense.com 

RE: partitions

Posted by "Belmon, Stephane" <sb...@websense.com>.
Right -- the real problem with log-structured filesystems is always reclaiming space from the old log.

Excellent, that makes a lot of sense.

--
Stephane Belmon sbelmon@websense.com
Principal Software Engineer
Websense, inc.
________________________________________
From: Leif Hedstrom [leif@ogre.com]
Sent: Wednesday, November 04, 2009 5:42 PM
To: trafficserver-dev@incubator.apache.org
Subject: Re: partitions

On Nov 4, 2009, at 4:24 PM, Belmon, Stephane wrote:

> Hello YTS folks,
>
> Yesterday (or was that on IRC?) I think it was stated that once
> around the ring, a partition gets dropped (rather than compacted I
> assume). Could you elaborate a bit on how the cache actually
> reclaims space in the current version? (I'll take my answer off the
> air ;-) especially if you want to compare and contrast with how it
> used to work, but I thought more people could be interested).

The disk cache gets divided up into "blocks", 8GB each. On top of each
8GB chunk, the allocated RAM cache gets split up, and there's an LRU
on top of that (so often-accessed objects in an 8GB chunk get served
out of RAM cache). In addition, there's an in-memory index for all the
"slots" in the disk cache.

As you write to disk cache, you consume space out of the first (or
current) 8GB block. Once it's filled up, we move on to the next one.
Once you fill the last 8GB block, we start over from the beginning
again, freeing the first 8GB chunk in one action. This means cache
eviction is done 8GB at a time.
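The wrap-around scheme can be illustrated with a toy simulation (assumptions: my own names, a tiny block size instead of 8GB, and objects that each fit within one block). The point it shows is that wrapping onto a block drops that block's contents wholesale rather than compacting:

```python
# Toy sketch of ring-style block eviction. Hypothetical code, not ATS.
BLOCK_SIZE = 8  # stand-in for the 8GB blocks described above

class RingCache:
    def __init__(self, num_blocks):
        self.blocks = [[] for _ in range(num_blocks)]  # object names per block
        self.cur = 0   # block currently being written
        self.used = 0  # space consumed in the current block

    def write(self, name, size):
        if self.used + size > BLOCK_SIZE:
            # Current block is full: advance the cursor. Wrapping onto
            # the next block evicts everything in it in one action.
            self.cur = (self.cur + 1) % len(self.blocks)
            self.blocks[self.cur] = []  # drop, don't compact
            self.used = 0
        self.blocks[self.cur].append(name)
        self.used += size
```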

The reason for this design was simplicity and speed. There is no
metadata involved in managing the disk cache. The in-memory footprint
for the disk cache is (if I recall) 8 bytes per object, and this is
preallocated based on

     proxy.config.cache.min_average_object_size


This is 8000 by default. You can reduce the amount of space consumed
for in-memory indices by increasing this value, but that also means
you can store fewer objects. If all your objects are, say, 32KB or
larger, you are definitely better off raising this setting
accordingly.
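Taking the 8-bytes-per-slot figure above at face value, the back-of-envelope sizing works out like this (my own helper function, for illustration only):

```python
def index_bytes(cache_bytes, min_average_object_size, bytes_per_slot=8):
    """Approximate preallocated in-memory index size for the disk cache."""
    slots = cache_bytes // min_average_object_size  # number of index slots
    return slots * bytes_per_slot

# A 100 GB disk cache with the default 8000-byte average object size
# preallocates roughly 100 MB of index memory:
gb = 1000 ** 3
print(index_bytes(100 * gb, 8000))   # 100000000 bytes, i.e. ~100 MB
# Raising min_average_object_size to 32000 cuts that to ~25 MB, at the
# cost of being able to track only a quarter as many objects.
```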

I hope I haven't got any of this wrong, and hopefully it makes sense. :)

-- leif



