Posted to notifications@couchdb.apache.org by GitBox <gi...@apache.org> on 2021/04/22 04:08:19 UTC

[GitHub] [couchdb] Pyifan opened a new issue #3522: Automatic compaction never triggered

Pyifan opened a new issue #3522:
URL: https://github.com/apache/couchdb/issues/3522


   Hi Experts,
   
   We are using CouchDB 2.3.1 and have found that automatic compaction is not triggered as expected. According to the documentation, db_fragmentation is computed as:
   
    (file_size - data_size) / file_size * 100
   
   and compaction should be triggered when this ratio exceeds the configured threshold. However, in our usage we observe that data_size tracks the growth of file_size rather closely, so the fragmentation ratio never gets high enough, even when compaction could potentially release a lot of disk space. As evidence, after a manually triggered compaction the db size dropped from 114G to 49G (another time from ~600G to ~50G). Below are the file_size and data_size values we get from GET /{db}:
   
   {"db_name":"testplan","purge_seq":"0-g1AAAAH7eJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnymMBkgwHgNT____vZyUykGfAA4gB_8k2YAHEgP1kGJCkACST7CmxvQFi-3xybE8A2V5Pnu1JDiDN8eRpTmRIkofozAIAy-Kjpg","update_seq":"610385-g1AAAAITeJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnSmRIkv___39WEgOjVgKpmpMUgGSSPVS_4QWS9TuA9MdD9etLkaw_AaS_HqpfTYJU_XksQJKhAUgBjZgPMkOHkTwzFkDM2A8yQzmOPDMOQMy4DzJDdQN5ZjyAmAEOD80HWQBw36hU","sizes":{"file":**138221201430**,"external":123142485523,"active":123141079765},"other":{"data_size":**123142485523**},"doc_del_count":365243,"doc_count":169733,"disk_size":138221201430,"disk_format_version":7,"data_size":123141079765,"compact_running":false,"cluster":{"q":8,"n":1,"w":1,"r":1},"instance_start_time":"0"}
   
   {"db_name":"testplan","purge_seq":"0-g1AAAAH7eJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnymMBkgwPgNR_IMhKZCDVgKQEIJlUT57mRIYkefJ0Qty9AOLu_WQbcABiwH1yPK4A8rg9maHmANIcT4nfGyBOnw80IAsAg6ajpg","update_seq":"610397-g1AAAAITeJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnSmRIkv___39WEgOjVgKpmpMUgGSSPVS_4SWS9TuA9MdD9evLkqw_AaS_HqpfTYJU_XksQJKhAUgBjZgPMkOHkTwzFkDM2A8yQzmePDMOQMy4DzJDdRN5ZjyAmAEOD80nWQB5F6hg","sizes":{"file":**62651463702**,"external":60378495840,"active":60220917012},"other":{"data_size":**60378495840**},"doc_del_count":365243,"doc_count":169742,"disk_size":62651463702,"disk_format_version":7,"data_size":60220917012,"compact_running":true,"cluster":{"q":8,"n":1,"w":1,"r":1},"instance_start_time":"0"}
   
   I would expect that before the compaction, data_size would already be around 60378495840, so that the computed fragmentation reflects the disk space that compaction could actually free.
   
   Is that a correct expectation? Any suggestions, please?
   
   Thanks!!
   
   P.S. Our compaction-related config:
   
   
   compaction_daemon | check_interval | 3600
   min_file_size | 131072
   _default | [{db_fragmentation, "55%"}, {view_fragmentation, "60%"}, {from, "23:00"}, {to, "05:00"}]
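   
   For completeness, laid out as these settings would typically appear in local.ini (a sketch; I am assuming the _default rule sits under the [compactions] section, as described in the CouchDB docs, rather than under [compaction_daemon]):
   
    [compaction_daemon]
    check_interval = 3600
    min_file_size = 131072
   
    [compactions]
    _default = [{db_fragmentation, "55%"}, {view_fragmentation, "60%"}, {from, "23:00"}, {to, "05:00"}]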
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [couchdb] wohali closed issue #3522: Automatic compaction never triggered

Posted by GitBox <gi...@apache.org>.
wohali closed issue #3522:
URL: https://github.com/apache/couchdb/issues/3522


   

