Posted to user@couchdb.apache.org by Diogo Júnior <di...@fraunhofer.pt> on 2014/04/04 20:19:12 UTC

Optimizing Couchdb Performance

Hi,

I've been using couchdb since the middle of last year, and I'm currently experiencing some problematic performance issues.

I would like to know if you have a contact/email/person with experience in this field who might advise users on best practices regarding the maximum number of databases, views, replicators, etc. My system is a little bit complex and I would like some support from the community in improving its performance (if you have availability, of course).

Thanks,
--
Eng.º Diogo Júnior
Researcher | R&D Department

Fraunhofer Portugal AICOS
Edifício Central Rua Alfredo Allen, 455/461
4200-135 Porto Portugal
How to find us
Phone: +351 22 0408 300
www: www.fraunhofer.pt


Re: Optimizing Couchdb Performance

Posted by Stanley Iriele <si...@gmail.com>.
Hey, could you be a bit more specific about your problems? What is the
performance problem? What is your setup? What version of CouchDB are you
using?

This would be the place to find all of the things you mentioned, BTW.
On Apr 4, 2014 11:19 AM, "Diogo Júnior" <di...@fraunhofer.pt> wrote:

> Hi,
>
> I've been using couchdb since the middle of last year, and I'm currently
> experiencing some problematic performance issues.
>
> I would like to know if you have a contact/email/person with experience
> in this field who might advise users on best practices regarding the
> maximum number of databases, views, replicators, etc. My system is a
> little bit complex and I would like some support from the community in
> improving its performance (if you have availability, of course).
>
> Thanks,
> --
> Eng.º Diogo Júnior
> Researcher | R&D Department
>
> Fraunhofer Portugal AICOS
> Edifício Central Rua Alfredo Allen, 455/461
> 4200-135 Porto Portugal
> How to find us
> Phone: +351 22 0408 300
> www: www.fraunhofer.pt
>
>

Re: Optimizing Couchdb Performance

Posted by Jean-Yves Moulin <jy...@baaz.fr>.
Hi,


On 16 Apr 2014, at 19:33 , Matt Quinn <ma...@mjquinn.ca> wrote:

> On Wed, Apr 16, 2014 at 03:10:56PM +0000, Diogo Júnior wrote:
>> btw, is there any tool that you use to monitor couchdb performance (an on-the-fly tool with statistics being updated automatically)?
> 
> I haven't used it, but one option I've come across for collecting the
> stats that CouchDB publishes is collectd[0], with the cURL-JSON
> plugin[1]. The docs for that plugin use CouchDB as an example.


We have a script usable by net-snmp to export couchdb stats through snmp. Then we use cacti to produce this kind of graph:

http://jym.eileo.net/couch_req.png
http://jym.eileo.net/couch_code.png
http://jym.eileo.net/couch_io.png
http://jym.eileo.net/couch_timer.png
http://jym.eileo.net/couch_clients.png

I can send the shell script and the cacti xml export. Contact me if interested.


Best,
jym

Re: Optimizing Couchdb Performance

Posted by Matt Quinn <ma...@mjquinn.ca>.
On Wed, Apr 16, 2014 at 03:10:56PM +0000, Diogo Júnior wrote:
> btw, is there any tool that you use to monitor couchdb performance (an on-the-fly tool with statistics being updated automatically)?

I haven't used it, but one option I've come across for collecting the
stats that CouchDB publishes is collectd[0], with the cURL-JSON
plugin[1]. The docs for that plugin use CouchDB as an example.

[0] https://collectd.org/
[1] https://collectd.org/wiki/index.php/Plugin:cURL-JSON

-Matt
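
For a quick ad-hoc look at the same numbers before wiring up collectd, a small script can simply poll the _stats endpoint that the curl_json plugin scrapes. A minimal sketch, assuming CouchDB 1.x on localhost:5984 with no authentication (the stat names match the /_stats output quoted later in this thread):

# Poll CouchDB's /_stats and print a few counters every 10 seconds.
# Assumes CouchDB 1.x on localhost:5984 with no authentication.
import time
import requests  # third-party HTTP client: pip install requests

STATS_URL = "http://localhost:5984/_stats"

def fetch_stats():
    resp = requests.get(STATS_URL)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    while True:
        stats = fetch_stats()
        print("requests={} mean_request_time={} open_databases={}".format(
            stats["httpd"]["requests"]["current"],
            stats["couchdb"]["request_time"]["mean"],
            stats["couchdb"]["open_databases"]["current"]))
        time.sleep(10)  # crude polling; collectd handles scheduling and storage properly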

RE: Optimizing Couchdb Performance

Posted by Diogo Júnior <di...@fraunhofer.pt>.
Thanks Andy,

I'm already using that!

Cheers,
--
Eng.º Diogo Júnior
Researcher | R&D Department

Fraunhofer Portugal AICOS
Edifício Central Rua Alfredo Allen, 455/461
4200-135 Porto Portugal
How to find us
Phone: +351 22 0408 300
www: www.fraunhofer.pt


________________________________________
From: Andy Wenk [andywenk@apache.org]
Sent: 13 May 2014 10:07
To: user@couchdb.apache.org
Subject: Re: Optimizing Couchdb Performance

Diogo,

for monitoring you should also check out
https://github.com/gws/munin-plugin-couchdb

Cheers

Andy


On 16 April 2014 17:10, Diogo Júnior <di...@fraunhofer.pt> wrote:

> Any other comments or considerations that the community might have
> regarding my current implementation status?
>
> btw, is there any tool that you use to monitor couchdb performance
> (an on-the-fly tool with statistics being updated automatically)?
>
> Thanks,
> --
> Eng.º Diogo Júnior
> Researcher | R&D Department
>
> Fraunhofer Portugal AICOS
> Edifício Central Rua Alfredo Allen, 455/461
> 4200-135 Porto Portugal
> How to find us
> Phone: +351 22 0408 300
> www: www.fraunhofer.pt
>
>
> ________________________________________
> From: Andy Dorman [adorman@ironicdesign.com]
> Sent: 08 April 2014 15:06
> To: user@couchdb.apache.org
> Subject: Re: Optimizing Couchdb Performance
>
> On 04/08/2014 05:33 AM, Diogo Júnior wrote:
> > Well, let's start with the following: I'm using a pattern that might not
> be scalable (I would like to have your opinion on this). I have a
> database per user (in my case each user has a device, and each device
> database is always synchronized with the corresponding server database).
> So, one db per user. Is it good or not? What might be the drawbacks?
>
> You can not do queries across databases.
>
> At one time I considered doing a separate database per user because I
> need to limit user access to just their info and that was the only way I
> could think of to do it without writing a server middle layer to control
> access.  Turned out that I need to write that middle layer for other
> reasons, so we switched back to a single db with a document per user.
> This allows us to easily do queries involving multiple users.
>
> FWIW SQL databases have the same issues of course...but in the SQL world
> you would never consider creating a new database (at least I would not)
> for each user.
>
> --
> Andy Dorman
>
>


--
Andy Wenk
Hamburg - Germany
RockIt!

GPG fingerprint: C044 8322 9E12 1483 4FEC 9452 B65D 6BE3 9ED3 9588

 https://people.apache.org/keys/committer/andywenk.asc

Re: Optimizing Couchdb Performance

Posted by Andy Wenk <an...@apache.org>.
Diogo,

for monitoring you should also check out
https://github.com/gws/munin-plugin-couchdb

Cheers

Andy


On 16 April 2014 17:10, Diogo Júnior <di...@fraunhofer.pt> wrote:

> Any other comments or considerations that the community might have
> regarding my current implementation status?
>
> btw, is there any tool that you use to monitor couchdb performance
> (an on-the-fly tool with statistics being updated automatically)?
>
> Thanks,
> --
> Eng.º Diogo Júnior
> Researcher | R&D Department
>
> Fraunhofer Portugal AICOS
> Edifício Central Rua Alfredo Allen, 455/461
> 4200-135 Porto Portugal
> How to find us
> Phone: +351 22 0408 300
> www: www.fraunhofer.pt
>
>
> ________________________________________
> From: Andy Dorman [adorman@ironicdesign.com]
> Sent: 08 April 2014 15:06
> To: user@couchdb.apache.org
> Subject: Re: Optimizing Couchdb Performance
>
> On 04/08/2014 05:33 AM, Diogo Júnior wrote:
> > Well, let's start with the following: I'm using a pattern that might not
> be scalable (I would like to have your opinion on this). I have a
> database per user (in my case each user has a device, and each device
> database is always synchronized with the corresponding server database).
> So, one db per user. Is it good or not? What might be the drawbacks?
>
> You can not do queries across databases.
>
> At one time I considered doing a separate database per user because I
> need to limit user access to just their info and that was the only way I
> could think of to do it without writing a server middle layer to control
> access.  Turned out that I need to write that middle layer for other
> reasons, so we switched back to a single db with a document per user.
> This allows us to easily do queries involving multiple users.
>
> FWIW SQL databases have the same issues of course...but in the SQL world
> you would never consider creating a new database (at least I would not)
> for each user.
>
> --
> Andy Dorman
>
>


-- 
Andy Wenk
Hamburg - Germany
RockIt!

GPG fingerprint: C044 8322 9E12 1483 4FEC 9452 B65D 6BE3 9ED3 9588

 https://people.apache.org/keys/committer/andywenk.asc

RE: Optimizing Couchdb Performance

Posted by Diogo Júnior <di...@fraunhofer.pt>.
Any other comments or considerations that the community might have regarding my current implementation status?

btw, is there any tool that you use to monitor couchdb performance (an on-the-fly tool with statistics being updated automatically)?

Thanks,
--
Eng.º Diogo Júnior
Researcher | R&D Department

Fraunhofer Portugal AICOS
Edifício Central Rua Alfredo Allen, 455/461
4200-135 Porto Portugal
How to find us
Phone: +351 22 0408 300
www: www.fraunhofer.pt


________________________________________
From: Andy Dorman [adorman@ironicdesign.com]
Sent: 08 April 2014 15:06
To: user@couchdb.apache.org
Subject: Re: Optimizing Couchdb Performance

On 04/08/2014 05:33 AM, Diogo Júnior wrote:
> Well, let's start with the following: I'm using a pattern that might not be scalable (I would like to have your opinion on this). I have a database per user (in my case each user has a device, and each device database is always synchronized with the corresponding server database). So, one db per user. Is it good or not? What might be the drawbacks?

You can not do queries across databases.

At one time I considered doing a separate database per user because I
need to limit user access to just their info and that was the only way I
could think of to do it without writing a server middle layer to control
access.  Turned out that I need to write that middle layer for other
reasons, so we switched back to a single db with a document per user.
This allows us to easily do queries involving multiple users.

FWIW SQL databases have the same issues of course...but in the SQL world
you would never consider creating a new database (at least I would not)
for each user.

--
Andy Dorman


Re: Optimizing Couchdb Performance

Posted by Andy Dorman <ad...@ironicdesign.com>.
On 04/08/2014 05:33 AM, Diogo Júnior wrote:
> Well, let's start with the following: I'm using a pattern that might not be scalable (I would like to have your opinion on this). I have a database per user (in my case each user has a device, and each device database is always synchronized with the corresponding server database). So, one db per user. Is it good or not? What might be the drawbacks?

You can not do queries across databases.

At one time I considered doing a separate database per user because I 
need to limit user access to just their info and that was the only way I 
could think of to do it without writing a server middle layer to control 
access.  Turned out that I need to write that middle layer for other 
reasons, so we switched back to a single db with a document per user. 
This allows us to easily do queries involving multiple users.

FWIW SQL databases have the same issues of course...but in the SQL world 
you would never consider creating a new database (at least I would not) 
for each user.

-- 
Andy Dorman
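
For illustration, the single shared database that Andy describes might look roughly like this. This is only a sketch: the database name, the user_id field and the design doc / view names are invented, and creating databases and design docs normally needs admin credentials.

# One shared db with a document per user/reading, plus a view keyed by the
# owning user so cross-user queries are a single request.
import requests  # third-party HTTP client: pip install requests

COUCH = "http://localhost:5984"
DB = "shared_db"  # hypothetical database name

design_doc = {
    "_id": "_design/by_user",
    "views": {
        "docs": {
            # "user_id" is a made-up field; use whatever identifies the owner.
            "map": "function(doc) { if (doc.user_id) { emit(doc.user_id, null); } }"
        }
    },
}

requests.put("{}/{}".format(COUCH, DB))  # create the db (409 if it already exists)
requests.put("{}/{}/_design/by_user".format(COUCH, DB), json=design_doc)

# All documents belonging to one user, in a single request:
resp = requests.get(
    "{}/{}/_design/by_user/_view/docs".format(COUCH, DB),
    params={"key": '"alice"', "include_docs": "true"})
for row in resp.json().get("rows", []):
    print(row["doc"])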


RE: Optimizing Couchdb Performance

Posted by Diogo Júnior <di...@fraunhofer.pt>.
I would appreciate any help from the couchdb community. 

Thanks.
--
Eng.º Diogo Júnior
Researcher | R&D Department

Fraunhofer Portugal AICOS
Edifício Central Rua Alfredo Allen, 455/461
4200-135 Porto Portugal
How to find us
Phone: +351 22 0408 300
www: www.fraunhofer.pt


________________________________________
From: Diogo Júnior [diogo.junior@fraunhofer.pt]
Sent: 04 April 2014 20:12
To: user@couchdb.apache.org
Subject: RE: Optimizing Couchdb Performance

Well, let's start with the following: I'm using a pattern that might not be scalable (I would like to have your opinion on this). I have a database per user (in my case each user has a device, and each device database is always synchronized with the corresponding server database). So, one db per user. Is it good or not? What might be the drawbacks?

Each device database might be "queried" by some supervisors, and these supervisors are defined in a document that has the device identification and also the supervisor's email.
Each device synchronizes its database with the couchdb server db, and then I have a continuous replicator that replicates these identification documents from all dbs to one special db called users_supervisors, which can be consulted to find the association between supervisors and devices.
This means that when a supervisor logs in on the frontend I need to go to users_supervisors, query the documents that have the supervisor reference, and then query the databases for the specific info that the supervisor might want to consult. So, in case the supervisor has only one device associated, the frontend only goes to one device database and queries the data from that db, but in case the supervisor is related to 2 or 3 devices it's necessary to query all the different databases.

More info:
- I'm using a Rails skeleton app in the frontend to access couchdb data, using couchrest and couchrest_model.
- I have 16 design docs, with an average of 3 to 4 views per design doc.
- All devices are constantly running continuous replications between the device and the couchdb server.
- I'm using couchdb 1.3.1 (sometimes it crashes and I'm not able to check what happened).
- In order to use HTTPS between the devices and the server I had to put an apache reverse proxy in front (communication encrypted between the device and the proxy, then plaintext between the proxy and couchdb). I know that version 1.5.0 already has better support for https, certificates, etc. Would I benefit from dropping the reverse proxy and using just couchdb with https? What could the improvements be?
- The synchronization between the devices and the couchdb server databases occurs normally and without problems.
- The problem is on the server: the responses to http queries are sometimes a little bit slow (probably because the views are being updated after document insertions that occurred since the last time the view was indexed).
- I don't yet have a lot of databases (<200), and this number is supposed to grow to thousands...

As I'm experiencing slow requests from couchdb even without that many databases, I would like to know the best practices regarding the number of views per database, compaction rules and how they influence performance, document design guidelines depending on the number of updates, etc. And also what kind of plugins or tools I can use to evaluate couchdb performance in a dashboard with an automatic refresh mechanism (disk usage, processor usage, views being updated/indexed, etc.)

These are my stats in case it helps:
{
  "couchdb": {
    "open_databases": {
      "description": "number of open databases",
      "current": 137,
      "sum": 137,
      "mean": 0.002,
      "stddev": 0.109,
      "min": 0,
      "max": 10
    },
    "auth_cache_hits": {
      "description": "number of authentication cache hits",
      "current": 99277,
      "sum": 99277,
      "mean": 1.365,
      "stddev": 4.002,
      "min": 0,
      "max": 167
    },
    "auth_cache_misses": {
      "description": "number of authentication cache misses",
      "current": 49,
      "sum": 49,
      "mean": 0.001,
      "stddev": 0.026,
      "min": 0,
      "max": 2
    },
    "database_reads": {
      "description": "number of times a document was read from a database",
      "current": 8697252,
      "sum": 8697252,
      "mean": 119.524,
      "stddev": 447.12,
      "min": 0,
      "max": 11835
    },
    "database_writes": {
      "description": "number of times a database was changed",
      "current": 56102,
      "sum": 56102,
      "mean": 0.772,
      "stddev": 1.314,
      "min": 0,
      "max": 38
    },
    "request_time": {
      "description": "length of a request inside CouchDB without MochiWeb",
      "current": 340936083.926,
      "sum": 340936083.926,
      "mean": 10579.863,
      "stddev": 31149.99,
      "min": 1,
      "max": 745363
    },
    "open_os_files": {
      "description": "number of file descriptors CouchDB has open",
      "current": 2181,
      "sum": 2181,
      "mean": 0.03,
      "stddev": 0.618,
      "min": 0,
      "max": 39
    }
  },
  "httpd": {
    "requests": {
      "description": "number of HTTP requests",
      "current": 99026,
      "sum": 99026,
      "mean": 1.362,
      "stddev": 4.003,
      "min": 0,
      "max": 168
    },
    "bulk_requests": {
      "description": "number of bulk requests",
      "current": 16416,
      "sum": 16416,
      "mean": 0.226,
      "stddev": 0.603,
      "min": 0,
      "max": 20
    },
    "view_reads": {
      "description": "number of view reads",
      "current": 7760,
      "sum": 7760,
      "mean": 0.107,
      "stddev": 1.191,
      "min": 0,
      "max": 67
    },
    "clients_requesting_changes": {
      "description": "number of clients for continuous _changes",
      "current": 19,
      "sum": 19,
      "mean": 0,
      "stddev": 0.255,
      "min": -7,
      "max": 4
    },
    "temporary_view_reads": {
      "description": "number of temporary view reads",
      "current": 293,
      "sum": 293,
      "mean": 0.004,
      "stddev": 0.142,
      "min": 0,
      "max": 19
    }
  },
  "httpd_request_methods": {
    "DELETE": {
      "description": "number of HTTP DELETE requests",
      "current": 68,
      "sum": 68,
      "mean": 0.001,
      "stddev": 0.116,
      "min": 0,
      "max": 19
    },
    "HEAD": {
      "description": "number of HTTP HEAD requests",
      "current": 3786,
      "sum": 3786,
      "mean": 0.052,
      "stddev": 0.636,
      "min": 0,
      "max": 36
    },
    "POST": {
      "description": "number of HTTP POST requests",
      "current": 34155,
      "sum": 34155,
      "mean": 0.47,
      "stddev": 1.182,
      "min": 0,
      "max": 47
    },
    "PUT": {
      "description": "number of HTTP PUT requests",
      "current": 21014,
      "sum": 21014,
      "mean": 0.289,
      "stddev": 0.772,
      "min": 0,
      "max": 19
    },
    "GET": {
      "description": "number of HTTP GET requests",
      "current": 40105,
      "sum": 40105,
      "mean": 0.552,
      "stddev": 3.126,
      "min": 0,
      "max": 128
    },
    "COPY": {
      "description": "number of HTTP COPY requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  },
  "httpd_status_codes": {
    "400": {
      "description": "number of HTTP 400 Bad Request responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "201": {
      "description": "number of HTTP 201 Created responses",
      "current": 32560,
      "sum": 32560,
      "mean": 0.448,
      "stddev": 0.827,
      "min": 0,
      "max": 20
    },
    "403": {
      "description": "number of HTTP 403 Forbidden responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "409": {
      "description": "number of HTTP 409 Conflict responses",
      "current": 4891,
      "sum": 4891,
      "mean": 0.067,
      "stddev": 0.494,
      "min": 0,
      "max": 18
    },
    "200": {
      "description": "number of HTTP 200 OK responses",
      "current": 60849,
      "sum": 60849,
      "mean": 0.837,
      "stddev": 3.735,
      "min": 0,
      "max": 167
    },
    "202": {
      "description": "number of HTTP 202 Accepted responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "404": {
      "description": "number of HTTP 404 Not Found responses",
      "current": 428,
      "sum": 428,
      "mean": 0.006,
      "stddev": 0.111,
      "min": 0,
      "max": 12
    },
    "301": {
      "description": "number of HTTP 301 Moved Permanently responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "405": {
      "description": "number of HTTP 405 Method Not Allowed responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "500": {
      "description": "number of HTTP 500 Internal Server Error responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "401": {
      "description": "number of HTTP 401 Unauthorized responses",
      "current": 107,
      "sum": 107,
      "mean": 0.003,
      "stddev": 0.226,
      "min": 0,
      "max": 23
    },
    "304": {
      "description": "number of HTTP 304 Not Modified responses",
      "current": 7,
      "sum": 7,
      "mean": 0,
      "stddev": 0.018,
      "min": 0,
      "max": 2
    },
    "412": {
      "description": "number of HTTP 412 Precondition Failed responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  }
}


Thanks a lot!!
--
Eng.º Diogo Júnior


________________________________________
From: Alexander Shorin [kxepal@gmail.com]
Sent: 04 April 2014 19:28
To: user@couchdb.apache.org
Subject: Re: Optimizing Couchdb Performance

On Fri, Apr 4, 2014 at 10:19 PM, Diogo Júnior
<di...@fraunhofer.pt> wrote:
> I've been using couchdb since the middle of last year, and I'm currently experiencing some problematic performance issues.
>
> I would like to know if you have a contact/email/person with experience in this field who might advise users on best practices regarding the maximum number of databases, views, replicators, etc. My system is a little bit complex and I would like some support from the community in improving its performance (if you have availability, of course).

Could you tell us a little bit more about your situation: environment,
use cases, where you see performance issues? After that we can start
a discussion about ways to optimize your couch (:


--
,,,^..^,,,

RE: Optimizing Couchdb Performance

Posted by Diogo Júnior <di...@fraunhofer.pt>.
Well, let's start with the following: I'm using a pattern that might not be scalable (I would like to have your opinion on this). I have a database per user (in my case each user has a device, and each device database is always synchronized with the corresponding server database). So, one db per user. Is it good or not? What might be the drawbacks?

Each device database might be "queried" by some supervisors, and these supervisors are defined in a document that has the device identification and also the supervisor's email.
Each device synchronizes its database with the couchdb server db, and then I have a continuous replicator that replicates these identification documents from all dbs to one special db called users_supervisors, which can be consulted to find the association between supervisors and devices.
This means that when a supervisor logs in on the frontend I need to go to users_supervisors, query the documents that have the supervisor reference, and then query the databases for the specific info that the supervisor might want to consult. So, in case the supervisor has only one device associated, the frontend only goes to one device database and queries the data from that db, but in case the supervisor is related to 2 or 3 devices it's necessary to query all the different databases.
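
For illustration, that login path looks roughly like this as code. It is only a sketch: the design doc and view names (associations/by_supervisor, data/recent) are invented, and it assumes each identification document in users_supervisors carries the name of its device database.

# Fan-out read for one supervisor: one lookup in users_supervisors, then one
# view query per associated device database.
import json
import requests  # third-party HTTP client: pip install requests

COUCH = "http://localhost:5984"

def device_dbs_for(supervisor_email):
    # Hypothetical view that emits the supervisor's email as key and the
    # device database name as value.
    resp = requests.get(
        "{}/users_supervisors/_design/associations/_view/by_supervisor".format(COUCH),
        params={"key": json.dumps(supervisor_email)})
    resp.raise_for_status()
    return [row["value"] for row in resp.json()["rows"]]

def data_for(supervisor_email):
    # The cost of the db-per-user layout: one extra HTTP round trip (and one
    # separately maintained view index) per associated device database.
    results = {}
    for db in device_dbs_for(supervisor_email):
        r = requests.get("{}/{}/_design/data/_view/recent".format(COUCH, db),
                         params={"limit": 20, "descending": "true"})
        results[db] = r.json().get("rows", [])
    return results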

More info: 
- I'm using a Rails skeleton app in the frontend to access couchdb data, using couchrest and couchrest_model.
- I have 16 design docs, with an average of 3 to 4 views per design doc.
- All devices are constantly running continuous replications between the device and the couchdb server.
- I'm using couchdb 1.3.1 (sometimes it crashes and I'm not able to check what happened).
- In order to use HTTPS between the devices and the server I had to put an apache reverse proxy in front (communication encrypted between the device and the proxy, then plaintext between the proxy and couchdb). I know that version 1.5.0 already has better support for https, certificates, etc. Would I benefit from dropping the reverse proxy and using just couchdb with https? What could the improvements be?
- The synchronization between the devices and the couchdb server databases occurs normally and without problems.
- The problem is on the server: the responses to http queries are sometimes a little bit slow (probably because the views are being updated after document insertions that occurred since the last time the view was indexed; see the sketch after this list).
- I don't yet have a lot of databases (<200), and this number is supposed to grow to thousands...
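
On the slow responses mentioned in the list above: CouchDB only brings a view index up to date when the view is queried, so the first read after a burst of writes pays the whole indexing cost. One common workaround is to query with stale=ok (or stale=update_after on recent 1.x) so user-facing reads are served from the existing index, and to refresh the index off the request path. A minimal sketch with placeholder db/design/view names:

import requests  # third-party HTTP client: pip install requests

COUCH = "http://localhost:5984"
VIEW = "{}/some_device_db/_design/data/_view/recent".format(COUCH)  # placeholder names

# Serve from the index as it is right now; stale=update_after also triggers
# an index update after the response is sent.
fast = requests.get(VIEW, params={"stale": "update_after", "limit": 20})
print(fast.json().get("rows", []))

# Elsewhere (cron job, after bulk writes, etc.) touch the view normally so
# the index catches up away from user-facing requests.
requests.get(VIEW, params={"limit": 0})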

As I'm experiencing slow requests from couchdb even without that many databases, I would like to know the best practices regarding the number of views per database, compaction rules and how they influence performance, document design guidelines depending on the number of updates, etc. And also what kind of plugins or tools I can use to evaluate couchdb performance in a dashboard with an automatic refresh mechanism (disk usage, processor usage, views being updated/indexed, etc.)
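
On the compaction part of that question: in 1.x, compaction is triggered per database and per design document (view indexes are compacted separately from the database file). A minimal sketch of triggering it over HTTP, with placeholder names and credentials; newer 1.x releases also include a compaction daemon that can be configured to do this automatically, so check the docs for the version in use:

import requests  # third-party HTTP client: pip install requests

COUCH = "http://localhost:5984"
AUTH = ("admin", "password")   # placeholder admin credentials
HEADERS = {"Content-Type": "application/json"}
DB = "some_device_db"          # placeholder database name

# Compact the database file itself.
requests.post("{}/{}/_compact".format(COUCH, DB), auth=AUTH, headers=HEADERS)

# Compact the view index of one design doc (repeat for each design doc;
# "data" here is a placeholder design doc name, without the _design/ prefix).
requests.post("{}/{}/_compact/data".format(COUCH, DB), auth=AUTH, headers=HEADERS)

# Remove index files left behind by view definitions that no longer exist.
requests.post("{}/{}/_view_cleanup".format(COUCH, DB), auth=AUTH, headers=HEADERS)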

These are my stats in case it helps:
{
  "couchdb": {
    "open_databases": {
      "description": "number of open databases",
      "current": 137,
      "sum": 137,
      "mean": 0.002,
      "stddev": 0.109,
      "min": 0,
      "max": 10
    },
    "auth_cache_hits": {
      "description": "number of authentication cache hits",
      "current": 99277,
      "sum": 99277,
      "mean": 1.365,
      "stddev": 4.002,
      "min": 0,
      "max": 167
    },
    "auth_cache_misses": {
      "description": "number of authentication cache misses",
      "current": 49,
      "sum": 49,
      "mean": 0.001,
      "stddev": 0.026,
      "min": 0,
      "max": 2
    },
    "database_reads": {
      "description": "number of times a document was read from a database",
      "current": 8697252,
      "sum": 8697252,
      "mean": 119.524,
      "stddev": 447.12,
      "min": 0,
      "max": 11835
    },
    "database_writes": {
      "description": "number of times a database was changed",
      "current": 56102,
      "sum": 56102,
      "mean": 0.772,
      "stddev": 1.314,
      "min": 0,
      "max": 38
    },
    "request_time": {
      "description": "length of a request inside CouchDB without MochiWeb",
      "current": 340936083.926,
      "sum": 340936083.926,
      "mean": 10579.863,
      "stddev": 31149.99,
      "min": 1,
      "max": 745363
    },
    "open_os_files": {
      "description": "number of file descriptors CouchDB has open",
      "current": 2181,
      "sum": 2181,
      "mean": 0.03,
      "stddev": 0.618,
      "min": 0,
      "max": 39
    }
  },
  "httpd": {
    "requests": {
      "description": "number of HTTP requests",
      "current": 99026,
      "sum": 99026,
      "mean": 1.362,
      "stddev": 4.003,
      "min": 0,
      "max": 168
    },
    "bulk_requests": {
      "description": "number of bulk requests",
      "current": 16416,
      "sum": 16416,
      "mean": 0.226,
      "stddev": 0.603,
      "min": 0,
      "max": 20
    },
    "view_reads": {
      "description": "number of view reads",
      "current": 7760,
      "sum": 7760,
      "mean": 0.107,
      "stddev": 1.191,
      "min": 0,
      "max": 67
    },
    "clients_requesting_changes": {
      "description": "number of clients for continuous _changes",
      "current": 19,
      "sum": 19,
      "mean": 0,
      "stddev": 0.255,
      "min": -7,
      "max": 4
    },
    "temporary_view_reads": {
      "description": "number of temporary view reads",
      "current": 293,
      "sum": 293,
      "mean": 0.004,
      "stddev": 0.142,
      "min": 0,
      "max": 19
    }
  },
  "httpd_request_methods": {
    "DELETE": {
      "description": "number of HTTP DELETE requests",
      "current": 68,
      "sum": 68,
      "mean": 0.001,
      "stddev": 0.116,
      "min": 0,
      "max": 19
    },
    "HEAD": {
      "description": "number of HTTP HEAD requests",
      "current": 3786,
      "sum": 3786,
      "mean": 0.052,
      "stddev": 0.636,
      "min": 0,
      "max": 36
    },
    "POST": {
      "description": "number of HTTP POST requests",
      "current": 34155,
      "sum": 34155,
      "mean": 0.47,
      "stddev": 1.182,
      "min": 0,
      "max": 47
    },
    "PUT": {
      "description": "number of HTTP PUT requests",
      "current": 21014,
      "sum": 21014,
      "mean": 0.289,
      "stddev": 0.772,
      "min": 0,
      "max": 19
    },
    "GET": {
      "description": "number of HTTP GET requests",
      "current": 40105,
      "sum": 40105,
      "mean": 0.552,
      "stddev": 3.126,
      "min": 0,
      "max": 128
    },
    "COPY": {
      "description": "number of HTTP COPY requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  },
  "httpd_status_codes": {
    "400": {
      "description": "number of HTTP 400 Bad Request responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "201": {
      "description": "number of HTTP 201 Created responses",
      "current": 32560,
      "sum": 32560,
      "mean": 0.448,
      "stddev": 0.827,
      "min": 0,
      "max": 20
    },
    "403": {
      "description": "number of HTTP 403 Forbidden responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "409": {
      "description": "number of HTTP 409 Conflict responses",
      "current": 4891,
      "sum": 4891,
      "mean": 0.067,
      "stddev": 0.494,
      "min": 0,
      "max": 18
    },
    "200": {
      "description": "number of HTTP 200 OK responses",
      "current": 60849,
      "sum": 60849,
      "mean": 0.837,
      "stddev": 3.735,
      "min": 0,
      "max": 167
    },
    "202": {
      "description": "number of HTTP 202 Accepted responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "404": {
      "description": "number of HTTP 404 Not Found responses",
      "current": 428,
      "sum": 428,
      "mean": 0.006,
      "stddev": 0.111,
      "min": 0,
      "max": 12
    },
    "301": {
      "description": "number of HTTP 301 Moved Permanently responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "405": {
      "description": "number of HTTP 405 Method Not Allowed responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "500": {
      "description": "number of HTTP 500 Internal Server Error responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "401": {
      "description": "number of HTTP 401 Unauthorized responses",
      "current": 107,
      "sum": 107,
      "mean": 0.003,
      "stddev": 0.226,
      "min": 0,
      "max": 23
    },
    "304": {
      "description": "number of HTTP 304 Not Modified responses",
      "current": 7,
      "sum": 7,
      "mean": 0,
      "stddev": 0.018,
      "min": 0,
      "max": 2
    },
    "412": {
      "description": "number of HTTP 412 Precondition Failed responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  }
}


Thanks a lot!!
--
Eng.º Diogo Júnior


________________________________________
From: Alexander Shorin [kxepal@gmail.com]
Sent: 04 April 2014 19:28
To: user@couchdb.apache.org
Subject: Re: Optimizing Couchdb Performance

On Fri, Apr 4, 2014 at 10:19 PM, Diogo Júnior
<di...@fraunhofer.pt> wrote:
> I've been using couchdb since the middle of last year, and I'm currently experiencing some problematic performance issues.
>
> I would like to know if you have a contact/email/person with experience in this field who might advise users on best practices regarding the maximum number of databases, views, replicators, etc. My system is a little bit complex and I would like some support from the community in improving its performance (if you have availability, of course).

Could you tell us a little bit more about your situation: environment,
use cases, where you see performance issues? After that we can start
a discussion about ways to optimize your couch (:


--
,,,^..^,,,

Re: Optimizing Couchdb Performance

Posted by Alexander Shorin <kx...@gmail.com>.
On Fri, Apr 4, 2014 at 10:19 PM, Diogo Júnior
<di...@fraunhofer.pt> wrote:
> I've been using couchdb since the middle of last year, and I'm currently experiencing some problematic performance issues.
>
> I would like to know if you have a contact/email/person with experience in this field who might advise users on best practices regarding the maximum number of databases, views, replicators, etc. My system is a little bit complex and I would like some support from the community in improving its performance (if you have availability, of course).

Could you tell us a little bit more about your situation: environment,
use cases, where you see performance issues? After that we can start
a discussion about ways to optimize your couch (:


--
,,,^..^,,,