Posted to user@couchdb.apache.org by Michael Parker <mi...@gmail.com> on 2012/03/31 23:42:07 UTC

Introducing Iron Cushion, a benchmark and load test for CouchDB

Hi all,

Last week I searched for CouchDB benchmark and load testing suites. I
only came up with three-year-old blog posts containing one-off scripts
or read-performance tests using ab (ApacheBench), and the Definitive
Guide itself (http://guide.couchdb.org/draft/performance.html) didn’t
point to anything better.

So I went ahead and wrote one called Iron Cushion, which is available
at https://github.com/mgp/iron-cushion. It proceeds in two steps:
first, documents are bulk-inserted using CouchDB's Bulk Document API;
second, documents are individually created, read, updated, and deleted
in random order using CouchDB's Document API. Detailed statistics for
both steps are printed at the end.
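
For anyone unfamiliar with the two endpoints involved, here's a rough
Python sketch (using the requests library; this is not Iron Cushion's
own code, and the host, database name, and document bodies are just
placeholders) of the calls each step exercises:

  # Sketch of the two CouchDB endpoints the benchmark exercises.
  import requests

  BASE = "http://localhost:5984"
  DB = "benchmark_db"

  # Step 1: bulk insert via the Bulk Document API (_bulk_docs).
  docs = [{"_id": "doc-%d" % i, "value": i} for i in range(100)]
  resp = requests.post("%s/%s/_bulk_docs" % (BASE, DB), json={"docs": docs})
  resp.raise_for_status()

  # Step 2: individual CRUD operations via the Document API.
  # Create a document.
  requests.put("%s/%s/doc-created" % (BASE, DB), json={"value": 42})
  # Read it back; the response carries the current _rev.
  doc = requests.get("%s/%s/doc-created" % (BASE, DB)).json()
  # Update it; the latest _rev must be supplied.
  doc["value"] = 43
  requests.put("%s/%s/doc-created" % (BASE, DB), json=doc)
  # Delete it, passing the rev returned by the last read.
  rev = requests.get("%s/%s/doc-created" % (BASE, DB)).json()["_rev"]
  requests.delete("%s/%s/doc-created" % (BASE, DB), params={"rev": rev})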

You can specify the number of concurrent connections to the database,
a “schema” for the documents inserted and updated, how many documents
to bulk insert, how many CRUD operations to perform, and more; details
are on the GitHub page.
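
To give a rough idea of what the concurrent-connections setting
drives, here's a toy Python sketch that issues reads from a small
thread pool; it's only an illustration, not how Iron Cushion itself
manages its connections:

  # Toy illustration of issuing reads over several concurrent workers.
  import concurrent.futures
  import random
  import requests

  BASE = "http://localhost:5984"
  DB = "benchmark_db"

  def read_random_doc(_):
      # Pick one of the previously bulk-inserted documents at random.
      doc_id = "doc-%d" % random.randrange(100)
      return requests.get("%s/%s/%s" % (BASE, DB, doc_id)).status_code

  # Eight workers roughly approximates eight concurrent connections.
  with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
      statuses = list(pool.map(read_random_doc, range(1000)))
  print("completed %d reads" % len(statuses))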

(Disclaimer: Absolutely no warranty, don’t accidentally bulk insert a
million documents into your production DB, etc.)

All sorts of feedback is welcome!

Regards,
Mike

Re: Introducing Iron Cushion, a benchmark and load test for CouchDB

Posted by Adam Kocoloski <ko...@apache.org>.
On Mar 31, 2012, at 2:42 PM, Michael Parker wrote:

> [quoted original message snipped]

Wow Mike, this looks fantastic!  I especially like the json_document_schema_file bit.  Thanks for sharing,

Adam