Posted to dev@directory.apache.org by Emmanuel Lecharny <el...@apache.org> on 2007/05/07 20:30:28 UTC

[Profiling] What should we test ?

Hi,

I'm starting this thread so that we can discuss how we profile the server,
and what we should profile. The idea is to build a solid basis for our
profiling sessions, so that we can evaluate many parts of the server.

In my mind, we should try to run the following profiling sessions (a rough 
sketch of the first case follows the list):
- adding N entries in the server, with indexed attributes
- adding N entries in the server, without any indexed attributes
- deleting N entries in the server, with indexed attributes
- deleting N entries in the server, without indexed attributes
- searching N entries in the server, with indexed attributes, and using 
those indexed attributes. The entries will be picked randomly. The cache 
should be bigger than the number of entries, so that no search ever hits 
the disk
- searching N times for the same entry, using an indexed attribute
- doing N bind and N unbind requests

- we should also test the server without MINA; for that, running an 
embedded instance of the server is enough.

- to test MINA alone, we should also write a no-op LdapSearchHandler that 
always returns the same pre-built result, so that we don't go through the 
whole interceptor chain.
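
To make the first case concrete, here is a minimal sketch of a client that 
adds N entries over the wire through JNDI and times the loop. The port, the 
admin DN/password and the ou=users,ou=system container are assumptions 
(they match the usual ApacheDS defaults, but adjust as needed):

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class AddNEntries {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<String, Object>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:10389");    // assumed port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system"); // assumed admin DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");            // assumed password

        DirContext ctx = new InitialDirContext(env);

        int n = 10000;
        long start = System.currentTimeMillis();

        for (int i = 0; i < n; i++) {
            Attributes attrs = new BasicAttributes(true);
            Attribute oc = new BasicAttribute("objectClass");
            oc.add("top");
            oc.add("person");
            attrs.put(oc);
            attrs.put("cn", "user" + i);
            attrs.put("sn", "User" + i);
            // entries go under an assumed ou=users,ou=system container
            ctx.createSubcontext("cn=user" + i + ",ou=users,ou=system", attrs);
        }

        long elapsed = Math.max(1, System.currentTimeMillis() - start);
        System.out.println(n + " adds in " + elapsed + " ms ("
                + (1000L * n / elapsed) + " adds/s)");
        ctx.close();
    }
}

The delete session is the same loop with destroySubcontext(), and the 
indexed vs. non-indexed variants only differ in the server configuration, 
not in the client code.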

Any more suggestions?

Emmanuel

Re: [Profiling] What should we test ?

Posted by Alex Karasulu <ak...@apache.org>.
Hi,

I have so many ideas on this topic.  I'll try to outline and organize
them here in a coherent fashion.  Thanks, Emmanuel, for taking the
initiative to kick off this thread.

---------------------------------------------
Performance Testing Environment
---------------------------------------------

Testing Tool: SLAMD
============

We first need to set up a SLAMD environment where the load generators
(load injectors) run on separate machines, so that context switching
does not interfere with the results.

We need to either reuse existing SLAMD tests or devise our own.  Regardless,
we need some kind of test panel that exercises various operations, and
perhaps combinations of them, under various configurations.  I'm thinking
of situations where all entries are in the cache/memory (preloaded) versus
ones where the cache is disabled (set to 1).

Tested Servers
=========

I would like to run the test panel against several OSS servers on the market
to get some
comparative figures.

ApacheDS 1.0
ApacheDS 1.5
ApacheDS x.y
OpenLDAP (current)
Fedora DS (current)
OpenDS (current)

It's very important here for us to establish a baseline for ADS and to do
comparative
benchmarks on the same hardware and operating environment.

----------------------------------------
Different Kinds of Benchmarks
----------------------------------------

Macro Performance Tests
===============

We obviously want to test ADS as a standalone server to collect basic
metrics.  We
will no doubt test the following operations with specific controls (not LDAP
controls
but experimental controls).

Add
Del
Modify
Search
Bind

Some of the tests that come out of the box with SLAMD are usable, but one
must understand that they perform more than one kind of operation at a
time, which may cause cross effects while trying to measure the
characteristics of a single operation.  So while we can use these tests,
we must also create our own that isolate a specific operation.

While performing these tests we can extract more information by varying
different parameters.  For example, we can vary the following:

cache
indices
jvm
partitions
op specific parameters

As for operation-specific parameters, search probably allows for the most
variables.  We need to test the server with different scopes and with
result sets of varying sizes.
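
Outside of SLAMD, a throwaway client that hammers one operation in isolation
is easy to write; here is a rough sketch for the search case (thread count,
duration, base DN, filter and credentials are all arbitrary assumptions):

import java.util.Hashtable;
import java.util.concurrent.atomic.AtomicLong;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class SearchRate {

    static final int THREADS = 4;          // number of client threads (assumption)
    static final long DURATION_MS = 60000; // measurement window

    public static void main(String[] args) throws Exception {
        final AtomicLong ops = new AtomicLong();
        Thread[] workers = new Thread[THREADS];

        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    try {
                        DirContext ctx = connect();
                        SearchControls sc = new SearchControls();
                        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
                        long end = System.currentTimeMillis() + DURATION_MS;
                        while (System.currentTimeMillis() < end) {
                            // always the same indexed attribute: the "hot cache" case
                            NamingEnumeration<SearchResult> res =
                                ctx.search("ou=users,ou=system", "(cn=user42)", sc);
                            while (res.hasMore()) {
                                res.next();
                            }
                            ops.incrementAndGet();
                        }
                        ctx.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            workers[t].start();
        }

        for (Thread w : workers) {
            w.join();
        }
        System.out.println(ops.get() * 1000L / DURATION_MS
                + " searches/s with " + THREADS + " threads");
    }

    private static DirContext connect() throws Exception {
        Hashtable<String, Object> env = new Hashtable<String, Object>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:10389");    // assumed port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system"); // assumed credentials
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        return new InitialDirContext(env);
    }
}

The same skeleton works for the other operations; only the body of the inner
loop changes, which keeps the measurements free of cross effects.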


Micro Performance Tests
===============

Besides testing the performance of the server with each operation I would
like to configure
ApacheDS with some modified versions of the operation handlers (in the
protocol provider),
interceptors and the default backend.  Basically these analogs will be
instrumented versions
of their standard counterparts.  For example you might have a
TestSearchHandler, and a
set of MetricsInterceptors along with a TestJdbmBackend.  The idea is to
start collecting statistics
on each operation at various levels in the server.  Then we can set up a
server.xml file that uses these TestXXX components instead of the default
components, to collect micro benchmarks while saturating the server.

Another way in which we can enable these micro metrics is by designing them
into the
components and switching off the feature when not testing.  A configuration
parameter
can be used to dynamically enable/disable these instrumentation features.
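
As a sketch of what that instrumentation could look like (the class and
method names below are made up for illustration, they are not existing
ApacheDS components), the TestXXX analogs could share a small collector
like this, guarded by a single enable flag:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical shared collector for the TestXXX analogs.  A single flag,
// fed from the configuration, switches the whole thing off so the normal
// code path pays almost nothing when not testing.
public class OperationMetrics {

    private static volatile boolean enabled = false;

    private static final ConcurrentHashMap<String, AtomicLong> counts =
            new ConcurrentHashMap<String, AtomicLong>();
    private static final ConcurrentHashMap<String, AtomicLong> nanos =
            new ConcurrentHashMap<String, AtomicLong>();

    public static void setEnabled(boolean on) {
        enabled = on;
    }

    // Call at the start of an operation; returns a timestamp token.
    public static long start() {
        return enabled ? System.nanoTime() : 0L;
    }

    // Call at the end of an operation, e.g. record("jdbm.search", token).
    public static void record(String op, long token) {
        if (!enabled) {
            return;
        }
        counter(nanos, op).addAndGet(System.nanoTime() - token);
        counter(counts, op).incrementAndGet();
    }

    // Print one line per instrumented operation.
    public static void dump() {
        for (Map.Entry<String, AtomicLong> e : counts.entrySet()) {
            long n = e.getValue().get();
            long avgMicros = nanos.get(e.getKey()).get() / (n * 1000L);
            System.out.println(e.getKey() + ": " + n + " calls, avg " + avgMicros + " us");
        }
    }

    private static AtomicLong counter(ConcurrentHashMap<String, AtomicLong> map, String op) {
        AtomicLong fresh = new AtomicLong();
        AtomicLong existing = map.putIfAbsent(op, fresh);
        return existing != null ? existing : fresh;
    }
}

A TestSearchHandler or MetricsInterceptor would then only wrap the real call
between start() and record(), so the overhead stays negligible when the flag
is off.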


Capacity Performance Tests
=================

These tests will be critical for tuning the partition implementation and
devising
better heuristics for it.  Also we can build new partition implementations
and
test them to compare their performance characteristics.

We also need to test various operations as we scale the size of a partition,
to detect, identify and fix performance issues that arise with increased
capacity.

We could have various snapshots of the server each with different
capacities.  When
capacity tests are to be conducted we can clone the snapshot and use it for
the tests.
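
A cheap way to build those snapshots is to generate LDIF files of various
sizes and import each one into a fresh instance; a throwaway generator along
these lines would do (the entry layout and target container are assumptions):

import java.io.FileWriter;
import java.io.PrintWriter;

public class LdifGenerator {
    public static void main(String[] args) throws Exception {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 100000;
        String file = "users-" + n + ".ldif";
        PrintWriter out = new PrintWriter(new FileWriter(file));

        for (int i = 0; i < n; i++) {
            // flat DIT under an assumed ou=users,ou=system container
            out.println("dn: cn=user" + i + ",ou=users,ou=system");
            out.println("objectClass: top");
            out.println("objectClass: person");
            out.println("cn: user" + i);
            out.println("sn: User" + i);
            out.println();
        }

        out.close();
        System.out.println("wrote " + file);
    }
}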

Alex
