Description
When the log level is set to debug, at least one view has "lengthy" emissions (many characters, for example UUIDs) and a reduce function, and there is a constant inflow of documents, CouchDB will gradually increase RAM usage until it runs out of memory.
Steps to Reproduce
This can be reproduced on the latest version of CouchDB (I launched a local instance in a Docker container) in single-node mode.
In this gist https://gist.github.com/dianabarsan/408ed8b08bb653a6d16c04ba2126556b, there is a script that:
- sets log level to debug
- creates a database with one ddoc with one reduce function
- pushes 1 million documents that would be indexed
- triggers view indexes
Running this script against CouchDB in a Docker container (a compose.yml is also provided in the gist) should trigger gradually increasing RAM usage.
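For context, a minimal design document in the spirit of the steps above might look like the following. This is a hypothetical sketch, not the exact ddoc from the gist: the map function emits a lengthy key (a UUID assumed to be stored on each document as doc.uuid) and the reduce is the built-in _stats over numeric values:

```json
{
  "_id": "_design/test",
  "views": {
    "by_uuid": {
      "map": "function (doc) { emit(doc.uuid, 1); }",
      "reduce": "_stats"
    }
  }
}
```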
Expected Behaviour
Memory should be released, even when log level is debug.
Your Environment
Production server: Docker image of CouchDB 3.3.3, launched in a Kubernetes cluster hosted on AWS.
Local test server: Docker image of CouchDB 3.4.2, launched in a Docker container on Manjaro Linux.
Additional Context
In a three-node CouchDB cluster, we observed one node gradually increasing RAM usage compared to the other two nodes (up to 200GB vs 3GB). The k8s node would eventually run out of RAM and be evicted.
Our deployment receives a rather constant flow of new documents.
The only difference between the cluster nodes was that the runaway node had its log level set to debug.
Upon further investigation, it turned out that reduce functions (in our case _stats) on views with "lengthy" emissions were triggering the high RAM usage, presumably due to accumulating large debug logs.
After reverting the log level to info, this behavior disappeared and RAM usage became consistent across the CouchDB cluster.
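For anyone hitting the same symptom, the log level can be changed at runtime through CouchDB's configuration API; a sketch of the call we used (host and credentials are placeholders):

```shell
# Set the logging level back to info on the local node (admin credentials assumed)
curl -X PUT http://admin:password@localhost:5984/_node/_local/_config/log/level \
     -d '"info"'
```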
For my local deployment, I noticed that RAM does eventually get released, gradually, after there are no more writes. In our production environment this was never the case.