Replies: 1 comment
Hi @Pyifan, we're not supporting 2.x any more, so I converted this to a discussion. Hopefully someone has the time to look at your case more closely.
Hi Experts,
We are using CouchDB 2.3.1 and have found that automatic compaction isn't triggered as expected. According to the documentation, db_fragmentation is computed as:
(file_size - data_size) / file_size * 100
and compaction should be triggered once that ratio exceeds the configured threshold. However, in our usage we observe that data_size follows the growth of file_size rather closely, so the fragmentation ratio never gets high enough, even when compaction could potentially release a lot of disk space. Evidence: after a manually triggered compaction, the database size dropped from 114G to 49G (another time it dropped from ~600G to ~50G). Below are the file_size and data_size we get from GET /{db}:
====BEFORE COMPACTION====
{"db_name":"testplan","purge_seq":"0-g1AAAAH7eJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnymMBkgwHgNT____vZyUykGfAA4gB_8k2YAHEgP1kGJCkACST7CmxvQFi-3xybE8A2V5Pnu1JDiDN8eRpTmRIkofozAIAy-Kjpg","update_seq":"610385-g1AAAAITeJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnSmRIkv___39WEgOjVgKpmpMUgGSSPVS_4QWS9TuA9MdD9etLkaw_AaS_HqpfTYJU_XksQJKhAUgBjZgPMkOHkTwzFkDM2A8yQzmOPDMOQMy4DzJDdQN5ZjyAmAEOD80HWQBw36hU","sizes":{"file":138221201430,"external":123142485523,"active":123141079765},"other":{"data_size":123142485523},"doc_del_count":365243,"doc_count":169733,"disk_size":138221201430,"disk_format_version":7,"data_size":123141079765,"compact_running":false,"cluster":{"q":8,"n":1,"w":1,"r":1},"instance_start_time":"0"}
====AFTER COMPACTION====
{"db_name":"testplan","purge_seq":"0-g1AAAAH7eJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnymMBkgwPgNR_IMhKZCDVgKQEIJlUT57mRIYkefJ0Qty9AOLu_WQbcABiwH1yPK4A8rg9maHmANIcT4nfGyBOnw80IAsAg6ajpg","update_seq":"610397-g1AAAAITeJzLYWBg4MhgTmFQS84vTc5ISXLILEssKDA0MrY0NjHQy0xMKUvMK0lMT9XLLdZLzs_NAapnSmRIkv___39WEgOjVgKpmpMUgGSSPVS_4SWS9TuA9MdD9evLkqw_AaS_HqpfTYJU_XksQJKhAUgBjZgPMkOHkTwzFkDM2A8yQzmePDMOQMy4DzJDdRN5ZjyAmAEOD80nWQB5F6hg","sizes":{"file":62651463702,"external":60378495840,"active":60220917012},"other":{"data_size":60378495840},"doc_del_count":365243,"doc_count":169742,"disk_size":62651463702,"disk_format_version":7,"data_size":60220917012,"compact_running":true,"cluster":{"q":8,"n":1,"w":1,"r":1},"instance_start_time":"0"}
I would expect that, before the compaction, data_size should already be around 60378495840, so that the computed fragmentation reflects the disk space that compaction could potentially free.
Would that be the right expectation? Any suggestions, please?
Thanks!!
PS: our compaction-related config:
compaction_daemon | check_interval | 3600
                  | min_file_size  | 131072
                  | _default       | [{db_fragmentation, "55%"}, {view_fragmentation, "60%"}, {from, "23:00"}, {to, "05:00"}]
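For completeness, this is my understanding of how the 2.x docs lay these settings out in local.ini, with the daemon options under [compaction_daemon] and the _default rule under [compactions]; worth double-checking that our file actually uses these section names:

```ini
[compaction_daemon]
check_interval = 3600
min_file_size = 131072

[compactions]
_default = [{db_fragmentation, "55%"}, {view_fragmentation, "60%"}, {from, "23:00"}, {to, "05:00"}]
```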