Posted to notifications@couchdb.apache.org by GitBox <gi...@apache.org> on 2018/04/17 20:36:45 UTC

[GitHub] wohali opened a new issue #1286: Memory leak with replications of DBs with larger docs/attachments
URL: https://github.com/apache/couchdb/issues/1286
 
 
   Well, we thought we'd fixed #745, but we're still seeing a memory leak in production when replicating databases with large-ish documents (with attachments, docs are ~50-150 MB in size).
   
   A review of the running processes on a machine that was running out of RAM showed ~17k PIDs hung in some sort of SSL negotiation.
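
   To spot-check the same symptom from a remsh on the affected node, something like the following counts processes currently executing in SSL/TLS code (a rough heuristic; the module-name match and the exact module set vary by OTP release):

   ```erlang
   %% e.g. attach with: erl -name dbg@127.0.0.1 -remsh couchdb@127.0.0.1 -setcookie <cookie>
   %% Count processes whose current function lives in an ssl/tls module.
   length([P || P <- erlang:processes(),
                case erlang:process_info(P, current_function) of
                    {current_function, {M, _F, _A}} ->
                        lists:prefix("ssl", atom_to_list(M)) orelse
                        lists:prefix("tls", atom_to_list(M));
                    _ ->
                        false
                end]).
   ```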
   
   A workaround was applied: adding `-ssl session_lifetime 300` to `etc/vm.args` stopped the massive PID leak, but memory usage continues to increase.
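
   For reference, the flag as applied (this passes `session_lifetime`, in seconds, through to the `ssl` application):

   ```
   # etc/vm.args
   -ssl session_lifetime 300
   ```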
   
   At the same time, `max_http_request_size` was bumped high enough to cover the biggest doc + attachments, plus some overhead, to ensure no problems from 413 responses.
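
   Illustratively (the value below is a placeholder; size it in bytes to your own largest doc + attachments + overhead, and note that on 2.1.x the setting lives under `[httpd]`):

   ```ini
   ; etc/local.ini
   [httpd]
   ; ~256 MB, comfortably above a ~150 MB doc with attachments
   max_http_request_size = 268435456
   ```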
   
   After the workarounds were applied, replications from the production (`2.1.1`) cluster to the test (`master`) cluster showed no increase in RAM, but only while no documents were actually being transferred (i.e., continuous replications on DBs with no activity).
   
   As soon as documents needed replicating, memory usage started to climb. A test script was written to one-shot replicate 4 databases from production to test, nuke the DBs, recreate them as empty, and repeat. The test cluster (target) is acting as the replicator, and `/_replicator` documents are being used (as opposed to `/_replicate`). This approach steadily leaks memory in `beam.smp` on the test cluster.
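
   The script is essentially the following loop (a sketch with placeholder hostnames and credentials, not the exact script):

   ```sh
   #!/bin/sh
   # The target (test) cluster drives the replications via /_replicator docs.
   SRC='https://admin:pass@prod.example.com:6984'
   TGT='http://admin:pass@test.example.com:5984'

   while true; do
     for db in db1 db2 db3 db4; do
       # Nuke the target DB and recreate it empty.
       curl -sX DELETE "$TGT/$db" > /dev/null
       curl -sX PUT "$TGT/$db" > /dev/null
       # One-shot pull replication, driven by a /_replicator document.
       curl -sX POST "$TGT/_replicator" -H 'Content-Type: application/json' \
         -d "{\"source\": \"$SRC/$db\", \"target\": \"$TGT/$db\"}"
     done
     sleep 600  # crude: give the one-shot replications time to finish
   done
   ```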
   
   ## Current vs. Expected Behaviour
   CouchDB memory usage should stay relatively flat. Instead, with the test case above, memory usage increases monotonically at a rate of ~250 MB per 8 hours:
   
   ![memory-leak-1](https://user-images.githubusercontent.com/112292/38895098-15ab6208-425d-11e8-8f8e-83f344ed4f07.png)
   
   Forcing GC doesn't reduce the memory used.
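
   (GC was forced along these lines from a remsh; it reclaimed nothing:)

   ```erlang
   %% Force a garbage collection on every process on the node.
   [erlang:garbage_collect(P) || P <- erlang:processes()].
   ```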
   
   ## Steps to Reproduce (for bugs)
   See above: create a few databases with biggish docs/attachments on one server. Use a second server, running `master`, to act as both the target and the replicator. The target will exhibit the memory leak.
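
   To watch the leak while reproducing, sampling VM memory on the target works well (assuming a node recent enough to expose the `_system` endpoint and the `_local` node alias):

   ```sh
   # Sample the Erlang VM memory breakdown on the target node once a minute.
   while true; do
     curl -s 'http://admin:pass@test.example.com:5984/_node/_local/_system' | jq .memory
     sleep 60
   done
   ```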
