Posted to users@trafficserver.apache.org by Mike Partridge <pa...@seamicro.com> on 2011/06/10 15:54:52 UTC

Benchmarking ATS

Is there an easy method to artificially vary the cache hit/miss ratio 
that people would recommend? I am currently just generating more random 
content than can be cached by ATS.
This is what I was in the process of doing, but I was curious whether 
there is a better method others may have used. I am trying to do this 
to benchmark ATS at different cache hit/miss ratios.

Thanks
-Mike

Re: Benchmarking ATS

Posted by "ming.zym@gmail.com" <mi...@gmail.com>.
We use http_load for pre-deployment simulation testing. It is a good
tool for stress testing a large-volume cache system.

The achievable QPS depends on your situation, i.e. how many disks you
have, etc.
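
http_load reads a flat file of URLs to fetch, so the hit/miss mix can be baked into that file. A minimal sketch (hostname, port, and paths are assumptions for illustration) that writes a URL file where a chosen fraction of lines repeat a small hot set:

```python
import random

def write_urls(path, total, hit_ratio, hotset_size):
    """Write an http_load URL file: roughly `hit_ratio` of the lines
    repeat URLs drawn from a small hot set (probable hits); the rest
    are unique (guaranteed misses)."""
    uniq = 0
    with open(path, "w") as f:
        for _ in range(total):
            if random.random() < hit_ratio:
                f.write("http://cache.example:8080/hot/%d\n"
                        % random.randrange(hotset_size))
            else:
                uniq += 1
                f.write("http://cache.example:8080/miss/%d\n" % uniq)

write_urls("urls.txt", 50000, 0.7, 5000)
# then run it through http_load, e.g.:
#   http_load -parallel 50 -seconds 60 urls.txt
```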



Re: Benchmarking ATS

Posted by T Savarkar <ts...@gmail.com>.
On this thread, what tools can expert users recommend for testing
trafficserver capacity, e.g. requests/sec? Should I use httperf?

Tri


Re: Benchmarking ATS

Posted by John Plevyak <jp...@gmail.com>.
There is also the question of RAM hits vs. non-RAM hits. RAM hits incur
no seeks. Miss writes are aggregated, so misses are constrained by disk
write bandwidth. Non-RAM hits require seeks (approximately 1 seek per
MB), and that is what typically constrains performance for those
operations.

Unless you have mostly RAM hits, a large number of disks, or very
little CPU, you will probably be disk or network constrained.

I use a synthetic server with new URLs for misses, and select hits from
a "hotset" which is either sized to fit in RAM or not, depending on the
type of test.

More sophisticated techniques often use a Zipf distribution, although
there is some controversy over how well that models actual traffic.
You could also use logs to build a synthetic request stream which
better models your traffic, but then network delay issues and
peculiarities (dropped packets, MTU, etc.) could be modeled as well,
and you are down the rabbit hole.
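
A Zipf-weighted hotset selector, as mentioned above, can be sketched in a few lines (the exponent and set size here are arbitrary choices for illustration): the rank-k object gets relative popularity 1/k**s, so a few objects dominate the request stream while the long tail is touched rarely.

```python
import itertools
import random

def zipf_hotset_picker(hotset_size, s=1.1):
    """Return a picker over hot-set indices 0..hotset_size-1 where the
    rank-k object has relative popularity 1/k**s (a Zipf-like law)."""
    weights = [1.0 / (k ** s) for k in range(1, hotset_size + 1)]
    cum = list(itertools.accumulate(weights))  # precompute for speed
    population = range(hotset_size)
    def pick():
        return random.choices(population, cum_weights=cum)[0]
    return pick

pick = zipf_hotset_picker(1000)
sample = [pick() for _ in range(20000)]
```

Mapping each picked index to a hotset URL then gives a request stream whose popularity skew you control with the single exponent `s`.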


cheers,
john


Re: Benchmarking ATS

Posted by sridhar basam <sr...@basam.org>.

Hit/miss rates are determined by cache size and the fraction of
incoming requests that are cachable. Using a combination of the two,
you should be able to vary the cache hit/miss rate.
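
Rough back-of-the-envelope arithmetic for the two factors above (the numbers are made up for illustration): uncachable requests always miss, and cachable requests hit at whatever rate the cache size allows, so the overall hit ratio is bounded by the product.

```python
def overall_hit_ratio(cachable_fraction, hit_rate_on_cachable):
    """Upper-bound estimate of the overall hit ratio: uncachable
    requests always miss; cachable requests hit at the rate the
    cache size allows."""
    return cachable_fraction * hit_rate_on_cachable

# e.g. 90% of requests cachable, cache large enough for 60% of those:
print(round(overall_hit_ratio(0.9, 0.6), 2))  # 0.54
```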

 Sridhar