Posted to modperl@perl.apache.org by Perrin Harkins <pe...@elem.com> on 2005/08/24 22:27:50 UTC

Re: maintaining shared memory size (was: Re: swamped with connection?)

On Wed, 2005-08-24 at 17:49 +1000, Badai Aqrandista wrote:
> I have put a reverse proxy in front of my mod_perl servers and I have set 
> MaxClients to 30. I have tried setting it to 50, but it slows down the 
> response time.

Are you running out of memory when you set it to 50?  That's what you
are trying to avoid by setting MaxClients.  Also, you are sending your
load test through the proxy server, and that has a much higher
MaxClients setting, right?
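
For reference, the frontend/backend split being described usually looks
something like the following in httpd.conf (a minimal sketch with
hypothetical ports and values, assuming a prefork Apache with mod_proxy
on the frontend):

```apache
# Frontend proxy httpd.conf: lightweight workers, so it can afford many
MaxClients       256
ProxyPass        / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/

# Backend mod_perl httpd.conf (separate server on port 8080): few, heavy
# workers, sized so MaxClients * per-process memory stays under real RAM
Listen     8080
MaxClients 30
```

The point of the low backend MaxClients is exactly what the thread says:
it caps total memory use so the box never swaps, while the proxy queues
and serves slow clients cheaply.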

> I ran httperf to create 50 connections. This is the result:
> 
> 1 connections per second  -> avg reply time = 103953.7 ms
> 10 connections per second -> avg reply time = 123167.2 ms
> 20 connections per second -> avg reply time = 121483.7 ms
> 30 connections per second -> avg reply time = 114411.3 ms
> 40 connections per second -> avg reply time = 130168.7 ms
> 50 connections per second -> avg reply time = 130457.4 ms

That looks pretty good to me.  It scales better than linearly, i.e. 50
conns is not 50 times slower than 1 conn -- 130457.4 ms is only about
1.25 times the 103953.7 ms figure for 1 conn/sec.
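
For anyone trying to reproduce numbers like the ones above, an httperf
run along these lines produces them (hypothetical host and URI; --rate
is the connection rate being varied, --num-conns the total):

```shell
# 50 total connections opened at 10 connections per second
httperf --server www.example.com --port 80 --uri /index.html \
        --rate 10 --num-conns 50
```

Repeating the run with --rate set to 1, 10, 20, 30, 40, and 50 gives
the table quoted above; the "Reply time" section of httperf's report
is where the average reply time comes from.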

> When only creating 1 connection, the avg reply time = 3289.4 ms

That doesn't seem to be what your numbers above say.

> I have no idea where to start fixing this. It seems that the more connections 
> there are, the higher the avg reply time.

That's true for any web server.  More connections means the machine is
doing more work and running more processes, so of course it will be
slower.  All you need to do is make sure you are not running out of
memory and going into swap.

If the performance you get is not enough for you, look for the
bottleneck.  This could be running out of CPU, running out of memory, or
contention for a shared resource like a database.  Then you fix it, by
changing code or buying hardware.

- Perrin


Re: maintaining shared memory size (was: Re: swamped with connection?)

Posted by Badai Aqrandista <ba...@hotmail.com>.
Hi all,

>If the performance you get is not enough for you, look for the
>bottleneck.  This could be running out of CPU, running out of memory, or
>contention for a shared resource like a database.  Then you fix it, by
>changing code or buying hardware.

I'll try that...

Thanks for helping me out... Especially Perrin... I'll let you know how it 
goes...

---
Badai Aqrandista
Cheepy (?)



Re: maintaining shared memory size (was: Re: swamped with connection?)

Posted by Perrin Harkins <pe...@elem.com>.
On Thu, 2005-08-25 at 17:46 +1000, Badai Aqrandista wrote:
> To reduce memory consumption -> look for ways to do more things at compile 
> time

Also avoid loading large things into memory all at once (e.g. slurping
files), pass references instead of copies, use the disk more, etc. 
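
As a sketch of both points in plain Perl (nothing mod_perl-specific
assumed; the file path and sub names are made up for illustration):

```perl
use strict;
use warnings;

# Read line by line instead of slurping: memory use stays constant no
# matter how large the file is.
sub count_matching_lines {
    my ($path, $pattern) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    my $count = 0;
    while (my $line = <$fh>) {
        $count++ if $line =~ /$pattern/;
    }
    close $fh;
    return $count;
}

# Pass a reference, not a copy: the caller's array is never duplicated,
# only one scalar (the reference) crosses the call boundary.
sub total_length {
    my ($lines_ref) = @_;
    my $total = 0;
    $total += length $_ for @$lines_ref;
    return $total;
}

my @lines = ("foo\n", "barbar\n");
print total_length(\@lines), "\n";   # prints 11
```

Under mod_perl this matters twice over: every byte a child process
holds privately is multiplied by MaxClients.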

> To reduce CPU usage -> write the most-called subroutines in C

More likely would be to look at the approach you used in the slow code
(which you find with profiling) and see if you can change the algorithm
to something that does less work.  That usually gives the largest gains.
Sometimes you may be using a CPAN module (LWP, Cache::Cache) which has a
faster alternative.  Sometimes caching objects (DBI statement handles
for example) helps.
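
A hypothetical illustration of the "change the algorithm" point: a
profiler (Devel::DProf, for instance) might show a linear scan running
inside a loop, which a one-time hash index removes entirely -- the data
and sub names below are invented for the sketch:

```perl
use strict;
use warnings;

# Slow: for every order, scan the whole customer list -- O(n * m).
sub join_slow {
    my ($orders, $customers) = @_;
    my @out;
    for my $o (@$orders) {
        for my $c (@$customers) {
            if ($c->{id} == $o->{customer_id}) {
                push @out, "$o->{item} for $c->{name}";
                last;
            }
        }
    }
    return \@out;
}

# Faster: build a hash index once, then constant-time lookups -- O(n + m).
sub join_fast {
    my ($orders, $customers) = @_;
    my %by_id = map { $_->{id} => $_ } @$customers;
    return [ map { "$_->{item} for $by_id{ $_->{customer_id} }{name}" }
             @$orders ];
}

my $customers = [ { id => 1, name => 'ann' }, { id => 2, name => 'bob' } ];
my $orders    = [ { item => 'hat', customer_id => 2 } ];
print join_fast($orders, $customers)->[0], "\n";   # prints "hat for bob"
```

No C was needed: the win comes from doing less work, which is usually
a bigger gain than doing the same work faster.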

> To reduce database contention -> cache as much as possible

That can help with slow read-only queries.  For actual contention (which
implies writes and locks) you usually need to either change your access
patterns or use a different database (InnoDB tables in MySQL, Postgres,
etc.)
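
For the slow read-only case, the cache can be as simple as a keyed hash
with an expiry time. This is a hand-rolled sketch of the idea that
Cache::Cache and friends package up properly (key names and TTL are
invented; under mod_perl the hash persists for the life of the child):

```perl
use strict;
use warnings;

my %cache;        # key => [ expiry_epoch, value ]
my $TTL = 300;    # how long a read-only result may be served stale

# Return the cached value for $key, calling $fetch only when the entry
# is missing or expired; every other call skips the expensive work.
sub cached {
    my ($key, $fetch) = @_;
    my $entry = $cache{$key};
    if (!$entry || $entry->[0] < time()) {
        $cache{$key} = $entry = [ time() + $TTL, $fetch->() ];
    }
    return $entry->[1];
}

my $calls = 0;
my $fetch = sub { $calls++; return "row-for-42" };  # stands in for a query
cached("user:42", $fetch) for 1 .. 3;
print "$calls\n";   # prints 1 -- the expensive fetch ran only once
```

Note this only relieves read pressure; it does nothing for lock
contention on writes, which is why the access-pattern or storage-engine
change above is the real fix there.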

- Perrin


Re: maintaining shared memory size (was: Re: swamped with connection?)

Posted by Badai Aqrandista <ba...@hotmail.com>.
>If the performance you get is not enough for you, look for the
>bottleneck.  This could be running out of CPU, running out of memory, or
>contention for a shared resource like a database.  Then you fix it, by
>changing code or buying hardware.

I think I've got the point now:

To reduce memory consumption -> look for ways to do more things at compile 
time
To reduce CPU usage -> write the most-called subroutines in C
To reduce database contention -> cache as much as possible

Thanks for all your help...
The journey just begins...

---
Badai Aqrandista
Cheepy (?)
