Posted to erlang@couchdb.apache.org by Dave Cottlehuber <dc...@jsonified.com> on 2012/11/12 16:23:48 UTC

starting on metrics

I'm a big fan of measuring stuff, so here's a branch that upgrades
couchdb_stats_collector to track stuff, including vm stats and
GET/POST/PUT requests etc. You need graphite installed for this to
work.

https://github.com/dch/couchdb/compare/metrics

http://i.imgur.com/qvGMA.png (as you can see, I've not got a lot of traffic).

Then start `couchdb -i` and enter `application:start(estatsd).` when
you have a chance.

There are a few issues, suggestions welcomed:

- how should the application estatsd be started (or disabled) from couch?
- how should I pick up the config for graphite (port, server etc)?
- how does it work under load?
- I need to alias the non-vm counters so that you can see which
host/instance/db they come from
- any other interesting metrics? it's possible to split on
/db/_ddoc/... for example as well. This will likely require hacking
lots of modules. Not so sure about how to do that cleanly,
suggestions welcomed!

etc.

If somebody has a development couch that gets a bit of traffic I'd
love to get this up & running with you.

Once the larger issues are worked out I'll push this to apache/couchdb.

Finally, I'd like to get it all working with riemann[1], which is an
order of magnitude cooler, but that's a fair bit more work and
dependent on some fast-moving libraries. The Erlang library for
riemann seems overly complex & has some bugs, so that needs fixing
first.

A+
Dave

[1]: http://aphyr.github.com/riemann/index.html

Re: starting on metrics

Posted by Benoit Chesneau <bc...@gmail.com>.
On Thu, Nov 15, 2012 at 2:13 PM, Paul Davis <pa...@gmail.com> wrote:
> The idea here is good but I'm not at all a fan of the implementation. First
> off, no way should we be choosing a specific stats collection protocol.
> They're just too specific to a particular operations/infra configuration
> that anything we pick is going to be inadequate for a non trivial number of
> users.
>
> OTOH, I think it would be a very good idea to sit down and design the stats
> API to be pluggable. We already have two rough sides to the API (collection
> vs reporting). If we sat down and designed a collection API that would then
> talk to a configurable reporting API it'd allow for users to do a number of
> cool things with stats.
>
>

agree.

- benoît

Re: starting on metrics

Posted by Paul Davis <pa...@gmail.com>.
Whether or not the statsd/riemann/collectd plugins provide an HTTP interface
I think is up to the plugin author.


On Thu, Nov 15, 2012 at 10:58 PM, Paul Davis <pa...@gmail.com>wrote:

>
> Well, it would be nice to provide a clean internal API for storage,
>> then use that for the default HTTP plugin, yeah?
>>
>
> Not sure what you mean by storage here. I would say the first step is the
> API for collection which is just the "couch_stats:incr(Key)" type of
> discussion. The HTTP plugin would then just be a thing that provides an
> implementation for those functions and has an HTTP handler to report. For
> more complicated bits like statsd/riemann/collectd the plugin would just do
> what's necessary to forward on the collected metrics.
>

Re: starting on metrics

Posted by Jan Lehnardt <ja...@apache.org>.
Hey all,

I love this discussion, but note that this is erlang@ where we want to
get people comfortable hacking CouchDB internals, not dev@ where we make
it extra hard to get your patches in (which we don’t but you know what
I mean :)

If this is a training exercise for Dave that may or may not lead to stuff
we can merge into CouchDB proper, or use as a learning-prototype or whatever,
we should focus on that instead of trying to figure out what the proper
way to do this would be if it were on track to land in CouchDB.

Cheers
Jan
-- 



On Nov 16, 2012, at 08:13 , Ben Anderson <be...@cloudant.com> wrote:

> On Thu, Nov 15, 2012 at 7:58 PM, Paul Davis <pa...@gmail.com> wrote:
>>> Well, it would be nice to provide a clean internal API for storage,
>>> then use that for the default HTTP plugin, yeah?
>>> 
>> 
>> Not sure what you mean by storage here. I would say the first step is the
>> API for collection which is just the "couch_stats:incr(Key)" type of
>> discussion. The HTTP plugin would then just be a thing that provides an
>> implementation for those functions and has an HTTP handler to report. For
>> more complicated bits like statsd/riemann/collectd the plugin would just do
>> what's necessary to forward on the collected metrics.
> 
> More specifically I mean a split between metrics insertion ("storage")
> and metrics retrieval. That would make it straightforward to enable
> two simultaneous retrieval interfaces (e.g., Collectd and HTTP), since
> they wouldn't include conflicting implementations of an insertion API.
> 
> Probably better just to write this than awkwardly debate the
> semantics. I'm sure you'll like it when it's done, Paul. ;)


Re: starting on metrics

Posted by Ben Anderson <be...@cloudant.com>.
On Thu, Nov 15, 2012 at 7:58 PM, Paul Davis <pa...@gmail.com> wrote:
>> Well, it would be nice to provide a clean internal API for storage,
>> then use that for the default HTTP plugin, yeah?
>>
>
> Not sure what you mean by storage here. I would say the first step is the
> API for collection which is just the "couch_stats:incr(Key)" type of
> discussion. The HTTP plugin would then just be a thing that provides an
> implementation for those functions and has an HTTP handler to report. For
> more complicated bits like statsd/riemann/collectd the plugin would just do
> what's necessary to forward on the collected metrics.

More specifically I mean a split between metrics insertion ("storage")
and metrics retrieval. That would make it straightforward to enable
two simultaneous retrieval interfaces (e.g., Collectd and HTTP), since
they wouldn't include conflicting implementations of an insertion API.

Probably better just to write this than awkwardly debate the
semantics. I'm sure you'll like it when it's done, Paul. ;)

Re: starting on metrics

Posted by Paul Davis <pa...@gmail.com>.
> Well, it would be nice to provide a clean internal API for storage,
> then use that for the default HTTP plugin, yeah?
>

Not sure what you mean by storage here. I would say the first step is the
API for collection which is just the "couch_stats:incr(Key)" type of
discussion. The HTTP plugin would then just be a thing that provides an
implementation for those functions and has an HTTP handler to report. For
more complicated bits like statsd/riemann/collectd the plugin would just do
what's necessary to forward on the collected metrics.
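To make that collection/reporting split concrete, here's a minimal sketch of what the dispatch could look like. All names here (couch_stats_noop, the stats_reporter env key, the callback set) are invented for illustration, not an actual CouchDB API:

```erlang
%% Sketch only: module names, callbacks and the stats_reporter
%% config key are hypothetical, not real CouchDB APIs.
-module(couch_stats).
-export([incr/1, set_value/2]).

%% A reporting plugin would implement the same exports:
%%   incr(Key :: term()) -> ok.
%%   set_value(Key :: term(), Value :: number()) -> ok.
%% e.g. an HTTP-exposing module, a statsd forwarder, or a no-op.

incr(Key) ->
    Mod = reporter(),
    Mod:incr(Key).

set_value(Key, Value) ->
    Mod = reporter(),
    Mod:set_value(Key, Value).

%% Pick the configured reporter module; default to a no-op.
reporter() ->
    case application:get_env(couch, stats_reporter) of
        {ok, Mod} -> Mod;
        undefined -> couch_stats_noop
    end.
```

Call sites then only ever say `couch_stats:incr(Key)`; which reporting implementation runs behind that is purely a configuration matter.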

Re: starting on metrics

Posted by Ben Anderson <be...@cloudant.com>.
On Thu, Nov 15, 2012 at 4:49 PM, Paul Davis <pa...@gmail.com> wrote:
>> With regard to the rolling windows ("stats over past N seconds") idea
>> - it's definitely more complex on both the implementation and API
>> fronts, but I think it's worthwhile to keep around in some form. If
>> you toss it for fixed-windows - i.e., collect data for N seconds,
>> calculate your stats, then throw it away and start anew - you lose the
>> ability to take meaningful measurements at any point in time. This can
>> be misleading for pull-based requesters, such as humans. The API could
>> certainly be simplified. Perhaps the window size could be specified at
>> metric creation/specification time and returned along with the
>> response?
>>
>
> Sorry, I got distracted by my GF coming home in the middle of writing that
> email. I meant that I'd like to see a discussion about moving to a time
> slice based approach instead of the multiple-rolling-window approach. While
> generally in theory I agree with the comment about pull based I think
> that's only if we're naive. We could do something as simple as "every N
> seconds, calculate values for each metric, use those values for requests
> during the next time slice". Then we would get the constant values for N
> seconds. I think that sort of thing would be fairly obvious to humans
> clicking refresh, with little loss in precision compared to what we have now.

Ah, that sounds like a good approach.

>
> Also, (assuming we go with a plugin type config) then really this
> discussion would just be specific to the default plugin we provide with
> couchdb that mimics what we do now with HTTP based publishing. Where if
> someone wanted to write a riemann push thing, it could be totally different.

Well, it would be nice to provide a clean internal API for storage,
then use that for the default HTTP plugin, yeah?

Re: starting on metrics

Posted by Paul Davis <pa...@gmail.com>.
> With regard to the rolling windows ("stats over past N seconds") idea
> - it's definitely more complex on both the implementation and API
> fronts, but I think it's worthwhile to keep around in some form. If
> you toss it for fixed-windows - i.e., collect data for N seconds,
> calculate your stats, then throw it away and start anew - you lose the
> ability to take meaningful measurements at any point in time. This can
> be misleading for pull-based requesters, such as humans. The API could
> certainly be simplified. Perhaps the window size could be specified at
> metric creation/specification time and returned along with the
> response?
>

Sorry, I got distracted by my GF coming home in the middle of writing that
email. I meant that I'd like to see a discussion about moving to a time
slice based approach instead of the multiple-rolling-window approach. While
generally in theory I agree with the comment about pull based I think
that's only if we're naive. We could do something as simple as "every N
seconds, calculate values for each metric, use those values for requests
during the next time slice". Then we would get the constant values for N
seconds. I think that sort of thing would be fairly obvious to humans
clicking refresh, with little loss in precision compared to what we have now.
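That "every N seconds, freeze a snapshot" scheme could be sketched roughly like this (all names hypothetical; couch_stats:snapshot_counters/0 stands in for whatever reads the live counter storage):

```erlang
%% Sketch of the time-slice idea: every IntervalMs the server
%% snapshots the live counters and serves that frozen snapshot to
%% readers until the next tick, so repeated requests within a slice
%% see constant values. Module and function names are invented.
-module(couch_stats_slice).
-behaviour(gen_server).
-export([start_link/1, read/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(IntervalMs) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, IntervalMs, []).

%% Readers (e.g. the HTTP handler) always get the last snapshot.
read() ->
    gen_server:call(?MODULE, read).

init(IntervalMs) ->
    erlang:send_after(IntervalMs, self(), tick),
    {ok, {IntervalMs, []}}.

handle_call(read, _From, {_Interval, Snapshot} = State) ->
    {reply, Snapshot, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info(tick, {IntervalMs, _OldSnapshot}) ->
    %% snapshot_counters/0 is hypothetical: it would read the live
    %% counters (e.g. from an ets table) into a proplist.
    Snapshot = couch_stats:snapshot_counters(),
    erlang:send_after(IntervalMs, self(), tick),
    {noreply, {IntervalMs, Snapshot}}.
```

The recomputation cost is then paid once per slice rather than once per request.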

Also, (assuming we go with a plugin type config) then really this
discussion would just be specific to the default plugin we provide with
couchdb that mimics what we do now with HTTP based publishing. Where if
someone wanted to write a riemann push thing, it could be totally different.

Re: starting on metrics

Posted by Ben Anderson <be...@cloudant.com>.
I agree with Paul on the collection/reporting split and that we
shouldn't choose a protocol - that's one place where Folsom got things
right, splitting folsom_webmachine out of the main codebase.

I implemented the system we're using at Cloudant and made a lot of
those mistakes. We may open source it, but it's definitely not a good
candidate for a drop in to CouchDB. I'd love to work on something for
CouchDB, though, and get it right this time.

We could certainly get away with counters, gauges, and a distribution
metric (histogram?).

With regard to the rolling windows ("stats over past N seconds") idea
- it's definitely more complex on both the implementation and API
fronts, but I think it's worthwhile to keep around in some form. If
you toss it for fixed-windows - i.e., collect data for N seconds,
calculate your stats, then throw it away and start anew - you lose the
ability to take meaningful measurements at any point in time. This can
be misleading for pull-based requesters, such as humans. The API could
certainly be simplified. Perhaps the window size could be specified at
metric creation/specification time and returned along with the
response?

Again, I'd love to work on this with you, Dave. I'll give this some
thought tonight and see if I can come up with a good API proposal to
bounce off the list.

Cheers,
--
b

On Thu, Nov 15, 2012 at 2:45 PM, Paul Davis <pa...@gmail.com> wrote:
> Definitely good to make something work to play with. On a related note I
> think we need to seriously reevaluate some of the ways we use the config
> for these bits (granted, that's a future only tangentially related thing).
>
> As to your list of metrics, I think it depends on what you mean. The
> general types of stats that I'm aware of usually fit into a small number of
> categories:
>
> counters - generally speaking an atomically incrementing value (ie, open
> couchjs processes)
> gauges - record an absolute value (ie, CPU temperature)
> meters - record a rate of events (ie, HTTP requests)
> statsy stuff - slightly more complicated bits for recording stats on
> recorded values (ie, request latency with avg/stddev/min/max/percentiles)
>
> And I'd note that you can get away without some of these. Meters can be
> implemented with a counter and then using a derivative when graphing
> (Graphite does this with the nonNegativeDerivative function).
>
> (Didn't know where to put this, but the middle seems good) Also one thing
> we should look into is removing the time series based stats. Ie, the "stats
> over the last 1, 60, 300 seconds" stuff, as it makes things quite difficult and
> AFAIK isn't really useful (especially if you forward to a metrics analysis
> system). This would save us significantly in CPU and complexity.
>
> If I were going to write this code I would start by taking a look at a few
> other libraries and then figuring out what we might need as an API within
> the code base. Right now I could see us getting away with just counters,
> gauges, and maybe a basic statsy kind.
>
> Once you have the API then its just a matter of figuring out how to specify
> an implementation. I'm not sure what you mean by a custom behavior in this
> particular instance. We could write a behavior for a stats processor that
> implements the metric types we decide on I guess. Its really not super
> duper important other than it provides some compile time checks (but it
> also requires figuring out code paths when you compile the module that
> implements the behavior (and given that this thing would see high traffic I
> would go without cause you'll see if you forgot to implement a function
> quite quickly)). The newer couch_index code does stuff kinda like this.
> Though it's a lot more involved than you'd want to be. Also, more wild ideas
> in response to your efficiency questions.
>
> So I can actually think of a couple ways to do this efficiently that will
> limit the overhead for implementation. They're a bit complex in terms of the
> hack, but would be relatively constrained in where the complexity lives.
> For the time being I would start with something like mochiglobal to
> efficiently decide if you need to record a metric. Although that's a bit
> restrictive in that it requires atoms as key names. I have a similar module
> I can open source that allows arbitrary keys at the expense of adding a
> function clause pattern match. Although if you want to get *really*
> awesomely crazy, a fun way to try doing this particular "implementation
> swap" would be to dynamically replace the implementation module at runtime
> (not as crazy as it sounds, but still slightly crazy). CouchDB could
> ship with two versions of this module. One would be the current "expose
> values over HTTP" method and one could be a "no-op" that people who just
> wanted performance could use (nfc what the performance penalties are of the
> current style, though it has tipped nodes over before).
>
> Things to look at for thoughts:
>
> http://metrics.codahale.com/
> https://github.com/basho/folsom
> https://collectd.org/wiki/index.php/Data_source
>
>
>
> On Thu, Nov 15, 2012 at 4:35 PM, Dave Cottlehuber <dc...@jsonified.com> wrote:
>
>> On 15 November 2012 14:13, Paul Davis <pa...@gmail.com> wrote:
>> > The idea here is good but I'm not at all a fan of the implementation.
>> First
>> > off, no way should we be choosing a specific stats collection protocol.
>> > They're just too specific to a particular operations/infra configuration
>> > that anything we pick is going to be inadequate for a non trivial number
>> of
>> > users.
>>
>> Absolutely - but as a first go I am learning a lot :-)). First make it
>> work, then make it pretty?
>>
>> Yesterday I hacked in starting up estatsd and enabling/disabling this
>> via config file:
>>
>>
>> https://github.com/dch/couchdb/commit/e885e55ee91b77be41363c0fd76414036650dcaa
>>
>> It's hacky but it works, I think.
>>
>> > OTOH, I think it would be a very good idea to sit down and design the
>> stats
>> > API to be pluggable. We already have two rough sides to the API
>> (collection
>> > vs reporting). If we sat down and designed a collection API that would
>> then
>> > talk to a configurable reporting API it'd allow for users to do a number
>> of
>> > cool things with stats.
>>
>> Nice split.
>>
>> Re measuring "properly" we could get by with 3 "things":
>>
>> - counters (http reqs, # of active couchjs procs maybe)
>> - duration
>> - events (replication started, etc)
>>
>> And then plug into graphite, riemann, whatever takes your fancy. Would
>> the best way to provide an API for these counters be to write
>> a custom behaviour? Any existing code you can point to that does this
>> sort of thing?
>>
>> Last question, any tip on how to implement this in a way that you can
>> turn off metrics and avoid the performance hit completely, without
>> needing a recompile (e.g. to remove macros)?
>>
>> A+
>> Dave
>>

Re: starting on metrics

Posted by Paul Davis <pa...@gmail.com>.
Definitely good to make something work to play with. On a related note I
think we need to seriously reevaluate some of the ways we use the config
for these bits (granted, that's a future only tangentially related thing).

As to your list of metrics, I think it depends on what you mean. The
general types of stats that I'm aware of usually fit into a small number of
categories:

counters - generally speaking an atomically incrementing value (ie, open
couchjs processes)
gauges - record an absolute value (ie, CPU temperature)
meters - record a rate of events (ie, HTTP requests)
statsy stuff - slightly more complicated bits for recording stats on
recorded values (ie, request latency with avg/stddev/min/max/percentiles)

And I'd note that you can get away without some of these. Meters can be
implemented with a counter plus a derivative applied when graphing
(Graphite does this with the nonNegativeDerivative function).
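For example, in Graphite that derivation happens entirely at query time against a plain counter (the metric path here is invented):

```
nonNegativeDerivative(couchdb.myhost.httpd.requests)
```

So CouchDB itself would only ever increment; the rate is the graphing system's problem.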

(Didn't know where to put this, but the middle seems good) Also one thing
we should look into is removing the time series based stats. Ie, the "stats
over the last 1, 60, 300 seconds" stuff, as it makes things quite difficult and
AFAIK isn't really useful (especially if you forward to a metrics analysis
system). This would save us significantly in CPU and complexity.

If I were going to write this code I would start by taking a look at a few
other libraries and then figuring out what we might need as an API within
the code base. Right now I could see us getting away with just counters,
gauges, and maybe a basic statsy kind.

Once you have the API then it's just a matter of figuring out how to specify
an implementation. I'm not sure what you mean by a custom behavior in this
particular instance. We could write a behavior for a stats processor that
implements the metric types we decide on, I guess. It's really not super
duper important other than it provides some compile-time checks, but it
also requires figuring out code paths when you compile the module that
implements the behavior; given that this thing would see high traffic I
would go without, because you'll see quite quickly if you forgot to
implement a function. The newer couch_index code does stuff kinda like this,
though it's a lot more involved than you'd want to be. Also, more wild ideas
in response to your efficiency questions.

So I can actually think of a couple ways to do this efficiently that will
limit the overhead for implementation. They're a bit complex in terms of the
hack, but would be relatively constrained in where the complexity lives.
For the time being I would start with something like mochiglobal to
efficiently decide if you need to record a metric. Although that's a bit
restrictive in that it requires atoms as key names. I have a similar module
I can open source that allows arbitrary keys at the expense of adding a
function clause pattern match. Although if you want to get *really*
awesomely crazy, a fun way to try doing this particular "implementation
swap" would be to dynamically replace the implementation module at runtime
(not as crazy as it sounds, but still slightly crazy). CouchDB could
ship with two versions of this module. One would be the current "expose
values over HTTP" method and one could be a "no-op" that people who just
wanted performance could use (nfc what the performance penalties are of the
current style, though it has tipped nodes over before).
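The no-op half of that two-module idea is about as small as it gets. A sketch, with invented names:

```erlang
%% Sketch: a no-op stats implementation exporting the same functions
%% as the "real" reporting module, so the two can be swapped (by
%% config, or by loading one in place of the other at runtime)
%% without recompiling any call sites. Names are hypothetical.
-module(couch_stats_noop).
-export([incr/1, set_value/2]).

incr(_Key) -> ok.
set_value(_Key, _Value) -> ok.
```

With this in place, users who just want raw performance pay only one indirect call per metric when stats are off.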

Things to look at for thoughts:

http://metrics.codahale.com/
https://github.com/basho/folsom
https://collectd.org/wiki/index.php/Data_source



On Thu, Nov 15, 2012 at 4:35 PM, Dave Cottlehuber <dc...@jsonified.com> wrote:

> On 15 November 2012 14:13, Paul Davis <pa...@gmail.com> wrote:
> > The idea here is good but I'm not at all a fan of the implementation.
> First
> > off, no way should we be choosing a specific stats collection protocol.
> > They're just too specific to a particular operations/infra configuration
> > that anything we pick is going to be inadequate for a non trivial number
> of
> > users.
>
> Absolutely - but as a first go I am learning a lot :-)). First make it
> work, then make it pretty?
>
> Yesterday I hacked in starting up estatsd and enabling/disabling this
> via config file:
>
>
> https://github.com/dch/couchdb/commit/e885e55ee91b77be41363c0fd76414036650dcaa
>
> It's hacky but it works, I think.
>
> > OTOH, I think it would be a very good idea to sit down and design the
> stats
> > API to be pluggable. We already have two rough sides to the API
> (collection
> > vs reporting). If we sat down and designed a collection API that would
> then
> > talk to a configurable reporting API it'd allow for users to do a number
> of
> > cool things with stats.
>
> Nice split.
>
> Re measuring "properly" we could get by with 3 "things":
>
> - counters (http reqs, # of active couchjs procs maybe)
> - duration
> - events (replication started, etc)
>
> And then plug into graphite, riemann, whatever takes your fancy. Would
> the best way to provide an API for these counters be to write
> a custom behaviour? Any existing code you can point to that does this
> sort of thing?
>
> Last question, any tip on how to implement this in a way that you can
> turn off metrics and avoid the performance hit completely, without
> needing a recompile (e.g. to remove macros)?
>
> A+
> Dave
>

Re: starting on metrics

Posted by Dave Cottlehuber <dc...@jsonified.com>.
On 15 November 2012 14:13, Paul Davis <pa...@gmail.com> wrote:
> The idea here is good but I'm not at all a fan of the implementation. First
> off, no way should we be choosing a specific stats collection protocol.
> They're just too specific to a particular operations/infra configuration
> that anything we pick is going to be inadequate for a non trivial number of
> users.

Absolutely - but as a first go I am learning a lot :-)). First make it
work, then make it pretty?

Yesterday I hacked in starting up estatsd and enabling/disabling this
via config file:

https://github.com/dch/couchdb/commit/e885e55ee91b77be41363c0fd76414036650dcaa

It's hacky but it works, I think.

> OTOH, I think it would be a very good idea to sit down and design the stats
> API to be pluggable. We already have two rough sides to the API (collection
> vs reporting). If we sat down and designed a collection API that would then
> talk to a configurable reporting API it'd allow for users to do a number of
> cool things with stats.

Nice split.

Re measuring "properly" we could get by with 3 "things":

- counters (http reqs, # of active couchjs procs maybe)
- duration
- events (replication started, etc)

And then plug into graphite, riemann, whatever takes your fancy. Would
the best way to provide an API for these counters be to write
a custom behaviour? Any existing code you can point to that does this
sort of thing?

Last question, any tip on how to implement this in a way that you can
turn off metrics and avoid the performance hit completely, without
needing a recompile (e.g. to remove macros)?

A+
Dave

Re: starting on metrics

Posted by Paul Davis <pa...@gmail.com>.
The idea here is good but I'm not at all a fan of the implementation. First
off, no way should we be choosing a specific stats collection protocol.
They're just too specific to a particular operations/infra configuration
that anything we pick is going to be inadequate for a non-trivial number of
users.

OTOH, I think it would be a very good idea to sit down and design the stats
API to be pluggable. We already have two rough sides to the API (collection
vs reporting). If we sat down and designed a collection API that would then
talk to a configurable reporting API it'd allow for users to do a number of
cool things with stats.


On Thu, Nov 15, 2012 at 7:13 AM, Jan Lehnardt <ja...@apache.org> wrote:

> Dave, this is excellent work and most of what I ever wanted from the
> stats module when we first wrote it :)
>
> Good luck with getting this done, I’d love to see it in CouchDB proper!
> Jan
> --
>
> On Nov 12, 2012, at 16:23 , Dave Cottlehuber <dc...@jsonified.com> wrote:
>
> > I'm a big fan of measuring stuff, so here's a branch that upgrades
> > couchdb_stats_collector to track stuff, including vm stats and
> > GET/POST/PUT requests etc. You need graphite installed for this to
> > work.
> >
> > https://github.com/dch/couchdb/compare/metrics
> >
> > http://i.imgur.com/qvGMA.png as you can see I've not got a lot of
> traffic.
> >
> > then start "couchdb -i" and enter `application:start(estatsd).` when
> > you have a chance.
> >
> > There are a few issues, suggestions welcomed:
> >
> > - how should the application estatsd be started (or disabled) from couch?
> > - how should I pick up the config for graphite (port, server etc)?
> > - how does it work under load?
> > - I need to alias the non-vm counters so that you can see which
> > host/instance/db they come from
> > - any other interesting metrics? it's possible to split on
> > /db/_ddoc/... for example as well
> > This will likely require hacking lots of modules. Not so sure about
> > how to do that cleanly, suggestions welcomed!
> >
> > etc.
> >
> > If somebody has a development couch that gets a bit of traffic I'd
> > love to get this up & running with you.
> >
> > Once the larger issues are worked out I'll push this to apache/couchdb.
> >
> > Finally, I'd like to get it all working with riemann[1] which is an
> > order of magnitude cooler, but that's a fair bit more work and
> > dependent on some fast moving libraries. The erlang library for
> > riemann seems overly complex & has some bugs so that needs fixing
> > first.
> >
> > A+
> > Dave
> >
> > [1]: http://aphyr.github.com/riemann/index.html
>
>

Re: starting on metrics

Posted by Jan Lehnardt <ja...@apache.org>.
Dave, this is excellent work and most of what I ever wanted from the
stats module when we first wrote it :)

Good luck with getting this done, I’d love to see it in CouchDB proper!
Jan
-- 

On Nov 12, 2012, at 16:23 , Dave Cottlehuber <dc...@jsonified.com> wrote:

> I'm a big fan of measuring stuff, so here's a branch that upgrades
> couchdb_stats_collector to track stuff, including vm stats and
> GET/POST/PUT requests etc. You need graphite installed for this to
> work.
> 
> https://github.com/dch/couchdb/compare/metrics
> 
> http://i.imgur.com/qvGMA.png as you can see I've not got a lot of traffic.
> 
> then start "couchdb -i" and enter `application:start(estatsd).` when
> you have a chance.
> 
> There are a few issues, suggestions welcomed:
> 
> - how should the application estatsd be started (or disabled) from couch?
> - how should I pick up the config for graphite (port, server etc)?
> - how does it work under load?
> - I need to alias the non-vm counters so that you can see which
> host/instance/db they come from
> - any other interesting metrics? it's possible to split on
> /db/_ddoc/... for example as well
> This will likely require hacking lots of modules. Not so sure about
> how to do that cleanly, suggestions welcomed!
> 
> etc.
> 
> If somebody has a development couch that gets a bit of traffic I'd
> love to get this up & running with you.
> 
> Once the larger issues are worked out I'll push this to apache/couchdb.
> 
> Finally, I'd like to get it all working with riemann[1] which is an
> order of magnitude cooler, but that's a fair bit more work and
> dependent on some fast moving libraries. The erlang library for
> riemann seems overly complex & has some bugs so that needs fixing
> first.
> 
> A+
> Dave
> 
> [1]: http://aphyr.github.com/riemann/index.html