Posted to server-dev@james.apache.org by "Noel J. Bergman" <no...@devtech.com> on 2002/10/16 06:36:15 UTC
Status of James load tests
Here are the results of two load tests. Basically, they suffer a steady
degradation in performance due to a memory leak. As you can see from
vmstat, sampling every 30 seconds during the tests, there is a steady
reduction in available memory, resulting in a steady shift from CPU time
spent on user work to CPU time spent in the system. Nothing particularly
useful in the logs right now, although I notice that both Harmeet's build
and Peter's build use the new connection manager.
The results of these tests are similar using either the Watchdog or the
Scheduler. It may be that the smtphandler / connection manager cycle has
issues, or it could be something else. In both cases, the sendMail call in
the SMTPHandler is commented out, so that should isolate the test to just
the handler and related classes. I'm going to run a test with thousands of
messages per connection instead of 2, to see whether it is messages or
connections that cause the problem.
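To illustrate the kind of cycle being suspected here, this is a generic sketch, not actual James or Avalon code, and the class and method names are hypothetical: a handler registered with a watchdog/scheduler for timeout monitoring, but never deregistered when the connection closes, keeps every handler object strongly reachable, so per-connection memory is never reclaimed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (NOT James code): a registry that keeps a strong
// reference to every handler it has ever watched.
class LeakySchedulerDemo {
    static class Handler { final byte[] sessionBuffer = new byte[1024]; }

    static final List<Handler> watchdogRegistry = new ArrayList<>();

    static void openConnection() {
        Handler h = new Handler();
        watchdogRegistry.add(h);   // registered for timeout monitoring
        // ... SMTP session would be handled here ...
        // BUG: no watchdogRegistry.remove(h) when the connection closes,
        // so the handler (and its buffers) stay reachable forever.
    }

    // Opens n connections against a fresh registry and reports how many
    // handler objects remain strongly reachable afterwards.
    static int simulate(int connections) {
        watchdogRegistry.clear();
        for (int i = 0; i < connections; i++) {
            openConnection();
        }
        return watchdogRegistry.size();
    }

    public static void main(String[] args) {
        // Every handler is still reachable, so none can be collected.
        System.out.println(simulate(10_000)); // prints 10000
    }
}
```

If this is the pattern, a fix is to deregister on close, or to let the scheduler hold the handlers only weakly.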
Clearly, we want to get a handle on this issue.
--- Noel
--------------------------------------------------------------------
$ ./postal -m 1 -p 20 -c 2 [server] userlist - | tee -a test.log
time,messages,data(K),errors,connections,SSL connections

Harmeet Build                Peter Build
23:09,1671,1449,0,1129,0     22:22,1612,1422,0,1106,0
23:10,1965,1705,27,1381,0    22:23,1941,1682,9,1320,0
23:11,2113,1848,4,1431,0     22:24,1966,1691,1,1307,0
23:12,2143,1866,5,1426,0     22:25,2065,1795,0,1381,0
23:13,1954,1689,0,1292,0     22:26,1940,1688,0,1296,0
23:14,1852,1626,0,1235,0     22:27,1844,1589,0,1229,0
23:15,1738,1489,0,1153,0     22:28,1739,1525,0,1155,0
23:16,1634,1428,0,1073,0     22:29,1621,1396,0,1078,0
23:17,1558,1354,2,1041,0     22:30,1541,1340,0,1013,0
23:18,1462,1271,0,974,0      22:31,1483,1292,0,983,0
23:19,1409,1211,0,940,0      22:32,1404,1217,0,936,0
23:20,1325,1149,0,903,0      22:33,1345,1151,0,899,0
23:21,1296,1144,0,868,0      22:34,1317,1149,0,856,0
23:22,1223,1058,0,836,0      22:35,1258,1093,0,836,0
--------------------------------------------------------------------
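A quick arithmetic check on the postal numbers above (Harmeet build, messages column) shows how sharp the degradation is, comparing the peak sample against the final one:

```java
// Back-of-the-envelope check on the postal output above:
// peak throughput (23:12 sample) vs. the final sample (23:22).
public class ThroughputDrop {
    public static void main(String[] args) {
        int peakMessages = 2143;  // 23:12, messages in that minute
        int lastMessages = 1223;  // 23:22, messages in that minute
        double dropPct = 100.0 * (peakMessages - lastMessages) / peakMessages;
        System.out.printf("%.0f%% fewer messages/min after 10 minutes%n", dropPct);
        // prints: 43% fewer messages/min after 10 minutes
    }
}
```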
[This particular set of numbers is with Harmeet's build]
$ vmstat 30
procs                       memory      swap          io     system      cpu
 r  b  w   swpd   free   buff  cache   si  so   bi  bo    in    cs  us  sy  id
 2  0  0  88652 107584  31728  29160    0   0    0   0     5     7   0   0   7
13  0  0  88652 100984  31728  29160    0   0    2  30   814  1329  86  14   0
12  0  0  88652  97332  31728  29160    0   0    2  33   856  1423  85  15   0
14  1  0  88652  93812  31728  29160    0   0    3  46  1008  1489  78  22   0
11  0  0  88652  88888  31728  29160    0   0    3  47   988  1496  74  26   0
16  0  0  88652  77652  31728  29160    0   0    3  45   987  1445  74  26   0
14  0  0  88652  75372  31728  29160    0   0    0  55  1020  1469  72  28   0
25  0  0  88652  71792  31728  29160    0   0    0  52   970  1443  68  32   0
25  0  0  88652  67392  31728  29160    0   0    2  45   917  1386  65  35   0
22  0  0  88652  62612  31728  29160    0   0    4  51   935  1317  62  38   0
 3  0  0  88652  57788  31728  29160    0   0    3  49   883  1270  55  45   0
 3  0  0  88652  52176  32760  29160    0   0    3  46   871  1253  54  46   0
 9  0  0  88652  46356  34120  29160    0   0    3  47   861  1261  52  48   0
19  0  0  88652  40568  35408  29160    0   0    3  46   829  1215  50  50   0
 5  0  0  88652  34984  36656  29160    0   0    3  44   789  1175  49  51   0
 8  0  0  88652  29336  37960  29160    0   0    3  52   797  1173  47  53   0
16  0  0  88652  24108  39156  29160    0   0    3  43   776  1159  47  53   0
16  0  0  88652  19128  40332  29160    0   0    3  42   756  1162  46  54   0
19  0  0  88652  13760  41512  29160    0   0    3  41   736  1106  42  58   0
19  0  0  88652   9112  42632  29160    0   0    3  38   725  1083  42  58   0
18  0  0  88652   4220  43712  29160    0   0    3  39   699  1064  41  59   0
19  0  0  88652   7516  44400  21316    0   0    3  39   686  1050  40  60   0
12  0  0  88652   3152  45292  21204    0   0    3  44   695  1027  40  60   0
11  0  0  88796   7392  46048  21356    0   5    5  33   677  1025  38  62   0
11  0  0  88796   3104  46892  21284    0   0    2  36   653   987  36  64   0
 9  0  0  90064   3032  45260  21552    0  42    2  51   736   970  38  62   0
---- test stopped here ----
21  0  0  90064   3028  42528  20648    0   0    2  33   635   965  36  64   0
27  0  0  90064   3036  40424  19784    0   0    3  42   639   941  36  64   0
 8  0  0  90064   3076  37596  19380    0   0    2  30   581   908  34  66   0
 5  0  0  90064   3244  37212  19380    0   0    0  17   134   607  14  86   0
12  0  0  90064   3116  37212  19380    0   0    1  15   132   643  11  89   0
 2  0  0  90064   3232  36828  19380    0   0    2  11   126   637   9  91   1
12  0  0  90064   3116  36572  19380    0   0    1  11   125   633  11  89   0
12  0  0  90064   3184  36060  19380    0   0    2  13   131   636  12  88   1
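To put a rough number on the leak rate, the free-memory column above can be reduced to an average consumption rate. The first vmstat line reports averages since boot, so start from the second sample, and stop at the last sample before the kernel begins reclaiming buffer/cache memory:

```java
// Rough leak rate from the vmstat "free" column (KB) above.
public class LeakRate {
    public static void main(String[] args) {
        int firstFreeKb = 100984;  // second sample (first line is since-boot averages)
        int lastFreeKb  = 4220;    // 19 samples (30 s apart) later, before reclaim
        double kbPerSecond = (firstFreeKb - lastFreeKb) / (19 * 30.0);
        System.out.printf("~%.0f KB/s of free memory consumed%n", kbPerSecond);
        // prints: ~170 KB/s of free memory consumed
    }
}
```

At roughly 20 messages/s from the postal output, that works out to several KB lost per message or connection, which is consistent with whole handler objects being retained.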
--
To unsubscribe, e-mail: <ma...@jakarta.apache.org>
For additional commands, e-mail: <ma...@jakarta.apache.org>
RE: Status of James load tests
Posted by Jason Webb <jw...@inovem.com>.
I'd be interested to know whether anyone has run any profiling tools
(NuMega etc.) on Avalon or on the James code itself.
We're buying the tools soon, so I might have a go, as we need to find
any long-term leaks before our customers do!
-- Jason
> -----Original Message-----
> From: Noel J. Bergman [mailto:noel@devtech.com]
> Sent: 16 October 2002 05:36
> To: James-Dev Mailing List
> Subject: Status of James load tests