Hi experts, we are experimenting with a cross-region CouchDB cluster (as/eu/na, one node per region), q=1, n=3. When we upload a document with 100 attachments of 1 MB each, upload performance degrades over time and we eventually see timeout errors:
[notice] 2021-03-17T03:57:46.301780Z couchdb@as_host <0.2337.0> 2952565779 127.0.0.1:8430 127.0.0.1 user_name PUT /db_name/a7c98cc1190641cf909ffd429e3ecd6d/1615953464-3b71f43ff30f4b15b5cd85dd9e95ebc7e84eb5a3-1048576.0256798_26?r=1&rev=1-8070b5f2861ac36a1ee1dab43617cc9d&w=1 201 ok 388
[notice] 2021-03-17T03:57:47.250710Z couchdb@as_host <0.2340.0> 5c8a9f68a1 127.0.0.1:8430 127.0.0.1 user_name PUT /db_name/a7c98cc1190641cf909ffd429e3ecd6d/1615953464-3b71f43ff30f4b15b5cd85dd9e95ebc7e84eb5a3-1048576.0317328_29?r=1&rev=2-6422d537c271d8bdfb224a9a0d53a5d2&w=1 201 ok 904
[notice] 2021-03-17T04:04:25.690079Z couchdb@as_host <0.10751.0> add2041592 127.0.0.1:8430 127.0.0.1 user_name PUT /db_name/a7c98cc1190641cf909ffd429e3ecd6d/1615953464-3b71f43ff30f4b15b5cd85dd9e95ebc7e84eb5a3-1048576.5915527_19?r=1&rev=100-34654069f2165a3cbb710a312807e6c5&w=1 201 ok 7432
[notice] 2021-03-17T04:04:34.405801Z couchdb@as_host <0.10906.0> 9744473add 127.0.0.1:8430 127.0.0.1 user_name PUT /db_name/a7c98cc1190641cf909ffd429e3ecd6d/1615953464-3b71f43ff30f4b15b5cd85dd9e95ebc7e84eb5a3-1048576.5966082_47?r=1&rev=101-8a324276ea90a673ad1255cca66d60a1&w=1 201 ok 8669
[error] 2021-03-17T04:06:35.557647Z couchdb@as_host emulator -------- Error in process <0.814.0> on node 'as_host' with exit value:
[warning] 2021-03-17T04:06:35.557956Z couchdb@as_host <0.291.0> -------- mem3_sync shards/00000000-ffffffff/db_name.1615868214 couchdb@eu_host {timeout,[{mem3_rpc,rexi_call,2,[{file,"src/mem3_rpc.erl"},{line,351}]},{mem3_rep,save_on_target,3,[{file,"src/mem3_rep.erl"},{line,477}]},{mem3_rep,'-replicate_batch/1-fun-0-',4,[{file,"src/mem3_rep.erl"},{line,416}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{mem3_rep,replicate_batch,1,[{file,"src/mem3_rep.erl"},{line,414}]},{mem3_rep,'-push_changes/1-fun-0-',3,[{file,"src/mem3_rep.erl"},{line,337}]},{mem3_rep,with_src_db,2,[{file,"src/mem3_rep.erl"},{line,558}]},{mem3_rep,repl,1,[{file,"src/mem3_rep.erl"},{line,227}]}]}

It seems the cluster is not replicating fast enough, which in turn slows down uploads. Is this expected behavior? Or is there a configuration switch we could tune to improve performance? Thank you very much!
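For reference, our upload loop is roughly the following sketch (host, credentials, and attachment file names are placeholders; we re-read the latest rev from each response before the next PUT):

```sh
#!/bin/sh
# Sketch of the upload pattern seen in the logs above.
# Host, credentials, and attachment names are placeholders.
HOST='http://user_name:password@127.0.0.1:5984'
DB=db_name
DOC=a7c98cc1190641cf909ffd429e3ecd6d

# Database created with a single shard and three replicas (q=1, n=3).
curl -X PUT "$HOST/$DB?q=1&n=3"

# Create the document, then attach 100 x 1 MB files one at a time,
# threading the returned rev into the next request.
REV=$(curl -s -X PUT "$HOST/$DB/$DOC" -d '{}' | jq -r .rev)
for i in $(seq 1 100); do
  REV=$(curl -s -X PUT "$HOST/$DB/$DOC/att_$i?rev=$REV" \
        -H 'Content-Type: application/octet-stream' \
        --data-binary "@att_$i.bin" | jq -r .rev)
done
```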
Replies: 1 comment
Don't do this. Seriously. Use standard CouchDB replication in this scenario.

You want all nodes in a CouchDB 2.x/3.x cluster to be within a few milliseconds of each other. So, multiple buildings in the same data centre or metropolitan area is fine. Your scenario is not, and is explicitly not supported by the CouchDB development team.
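Concretely: run one independent cluster (or single node) per region and connect them with the replicator. A minimal sketch, assuming one CouchDB endpoint per region (hostnames and credentials are placeholders):

```sh
# Standard inter-cluster replication via the _replicator database.
# Create one continuous replication document per direction/region pair.
curl -X POST 'http://admin:password@eu_host:5984/_replicator' \
  -H 'Content-Type: application/json' \
  -d '{
    "_id": "as_to_eu-db_name",
    "source": "http://admin:password@as_host:5984/db_name",
    "target": "http://admin:password@eu_host:5984/db_name",
    "continuous": true,
    "create_target": true
  }'
# Repeat for as<->eu, as<->na, and eu<->na as needed.
```

Each region then serves reads and writes at local latency, while the replicator moves changes across the WAN asynchronously instead of putting the WAN round-trip on every write, as internal shard replication does in your setup.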