Posted to server-dev@james.apache.org by "Noel J. Bergman" <no...@devtech.com> on 2002/10/16 10:08:58 UTC

Additional Load Test Data

This summarizes testing data for the two experimental builds.  The numbers
reported here are with JDK 1.3.1 Hotspot Server.  I have attached the raw
data log, which also includes the same tests using Hotspot Client.
Interestingly, the behavior is almost identical, except for a significant
initial hit when the James classes are first exercised.

./postal -m 1 -p 1 -c 50000 amitower userlist - | tee -a test.log
time,messages,data(K),errors,connections,SSL connections

Mem KB %P    Scheduler Build          Watchdog Build      %P  Mem KB
--------------------------------------------------------------------
129388    -- Before Phoenix --     -- Before Phoenix --       147768
112948    -- Before Testing --     -- Before Testing --       134200
107764 48 00:41,1944,1669,0,1,0    01:01,1814,1556,0,1,0  43  125048
103972 58                                                 50  124860
 99736 18 00:42,2024,1741,0,0,0    01:02,1987,1710,0,0,0  26  124860
 95420 20                                                 19  119172
 91148 25 00:43,2049,1746,0,0,0    01:03,2051,1755,0,0,0  15  115208
 86804 22                                                 14  111048
 82424 23 00:44,2061,1753,0,0,0    01:04,2057,1749,0,0,0  14  106844
 78048 25                                                 14  102572
 73672 24 00:45,2066,1747,0,0,0    01:05,2048,1742,0,0,0  15   98224
 69296 25                                                 14   93876
 64948 24 00:46,2062,1743,0,0,0    01:06,2053,1748,0,0,0  14   89528
 60548 24                                                 14   85180
 56172 25 00:47,2065,1763,0,0,0    01:07,2055,1753,0,0,0  14   80812
 51796 24                                                 14   76468
 47424 23 00:48,2062,1751,0,0,0    01:08,2068,1795,0,0,0  14   72136
 43056 24                                                 14   67748
 38656 24 00:49,2069,1760,0,0,0    01:09,2056,1758,0,0,0  15   63400
 34292 24                                                 14   59032
 29596 23 00:50,2077,1773,0,0,0    01:10,2042,1738,0,0,0  15   54624
 25212 24                                                 13   50256
 20784 24 00:51,2086,1813,0,0,0    01:11,2053,1771,0,0,0  15   45912
 16340 23                                                 15   41532
 11952 24 00:52,2082,1808,0,0,0    01:12,2051,1768,0,0,0  14   37228
  7552 24                                                 14   32840
  3408 23 00:53,2070,1786,0,1,0    01:13,2057,1764,0,1,0  14   28504
  3188 28                                                 13   24176
  3116 24 00:54,2060,1753,0,0,0    01:14,2058,1775,0,0,0  12   19812
  3072 22                                                 13   15444
  3188 22 00:55,2066,1760,0,0,0    01:15,2049,1754,0,0,0  12   14388
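
A small aside for anyone re-running this later: below is a minimal sketch of
how the comma-separated postal lines above could be summed from the raw log.
PostalLogTally is an invented name, not something that ships with James or
postal.

// PostalLogTally.java -- hypothetical helper, not part of James or postal.
// Sums the messages, data(K) and errors columns of a postal log whose data
// lines follow the format quoted above:
//   time,messages,data(K),errors,connections,SSL connections
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.StringTokenizer;

public class PostalLogTally {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        long messages = 0, dataK = 0, errors = 0;
        String line;
        while ((line = in.readLine()) != null) {
            StringTokenizer t = new StringTokenizer(line, ",");
            if (t.countTokens() != 6) continue;    // not a postal data line
            try {
                t.nextToken();                     // time (hh:mm), unused
                messages += Long.parseLong(t.nextToken());
                dataK    += Long.parseLong(t.nextToken());
                errors   += Long.parseLong(t.nextToken());
            } catch (NumberFormatException nfe) {
                // non-numeric lines (e.g. the column header) land here
            }
        }
        in.close();
        System.out.println(messages + " messages, " + dataK
                           + " KBytes, " + errors + " errors");
    }
}

Run against the teed log, e.g.:  java PostalLogTally test.log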

Basically, we're losing roughly 8.5K - 9K per 2050 1K messages.  The test
uses a single connection which never closes (I push 50,000 1K messages
through it, and it doesn't run to completion before I stop the test).

I want to stress that this behavior is common to both builds and both JITs.

Getting the same results for this test as for the previous one would seem
to indicate that the loss is not dependent upon the number of connections.
The next test will use a smaller number of large messages to see whether it
is message count or traffic volume that is impacting us.
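
One way to cross-check whether the loss shows up in the Java heap itself, as
opposed to the OS-level memory column above, would be to sample retained heap
from inside the James JVM once per postal reporting interval.  The sketch
below is illustrative only -- HeapSampler is not an existing James class, and
gc() is merely a hint to the collector:

// HeapSampler.java -- hypothetical in-JVM sampler, not part of James.
// Logs retained heap once a minute so the figures can be lined up against
// the postal and OS memory columns reported above.
public class HeapSampler extends Thread {

    public HeapSampler() {
        setDaemon(true);            // don't keep the server alive on shutdown
        setName("heap-sampler");
    }

    public void run() {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            // Encourage collection so we see retained memory, not garbage.
            rt.gc();
            rt.runFinalization();
            rt.gc();
            long usedK = (rt.totalMemory() - rt.freeMemory()) / 1024;
            System.out.println("retained heap: " + usedK + " KBytes");
            try {
                Thread.sleep(60 * 1000L);   // one postal reporting interval
            } catch (InterruptedException ie) {
                return;
            }
        }
    }

    // Standalone demo; in the real case it would be started from James's
    // initialization code instead.
    public static void main(String[] args) throws InterruptedException {
        HeapSampler sampler = new HeapSampler();
        sampler.setDaemon(false);   // keep the demo JVM alive
        sampler.start();
        sampler.join();
    }
}

Lining that output up against the memory columns above would help show
whether the growth is in the Java heap or elsewhere in the process.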

	--- Noel

RE: Additional Load Test Data (correction and addition)

Posted by "Noel J. Bergman" <no...@devtech.com>.
> Basically, we're losing roughly 8.5K - 9K per 2050 1K messages.

Brain cramp ... but it's 4AM.  Not KBytes.  MBytes.  Those numbers are
already in KBytes.  We're losing 8.5 to 9 megabytes per 2050 1K messages.
And I can now report the following results before calling it a night:

./postal -m 5000 -p 1 -c 1000 amitower userlist - | tee -a test.log
time,messages,data(K),errors,connections,SSL connections
04:14,11,26533,0,1,0
04:15,10,26628,0,0,0
04:16,10,26966,0,0,0
04:17,13,26884,0,0,0
04:18,12,27075,0,0,0
04:19,10,27096,0,0,0

$ vmstat 30
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 2  0  0  97532  89636  39184  26020   0   0     0   259 1413  1809  24   4  72
 2  0  0  97532  87456  41148  26020   0   0     0   319 1627  1950  25   6  70
12  0  0  97532  75352  52348  26020   0   0     0   458 1888  1900  27   5  68
12  0  0  97532  70292  57028  26020   0   0     0   365 1694  1907  25   5  70
 8  0  0  97532  70288  57028  26020   0   0     0   458 1889  1914  25   6  69
 2  0  0  97532  70280  57028  26020   0   0     0   413 1803  1907  26   6  68
12  0  0  97532  70272  57028  26020   0   0     0   316 1610  1930  25   5  70
12  0  0  97532  70268  57028  26020   0   0     0   455 1882  1893  25   7  68
12  0  0  97532  70268  57028  26020   0   0     0   368 1714  1916  25   5  70
12  0  0  97532  70264  57028  26020   0   0     0   365 1721  1947  25   5  69
 5  0  0  97532  70260  57028  26020   0   0     0   460 1902  1897  25   6  68
 2  0  0  97532  70252  57028  26020   0   0     0   360 1704  1932  24   5  71

That was a quick six-minute test with a single connection and relatively few,
large messages.  We are losing 4K to 8K per 30-second check, so between 8K
and 16K per ten 5000 KByte messages (I attribute the larger initial loss to
the memory buffer required to hold the message for storage).  And yes, this
time I do mean KBytes, not MBytes.  :-)

The JIT for that test was Hotspot Client.
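
(For scale, and purely as arithmetic on the figures already quoted above
rather than any new measurement: 8.5 - 9 MBytes per 2050 1K messages comes
to roughly 4.1 - 4.4 KBytes per message, and 8K - 16K per ten 5000 KByte
messages comes to roughly 0.8 - 1.6 KBytes per message.)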

	--- Noel

