Replies: 2 comments
-
Since this is about the general behavior of DuckDB, not something specific to the Node Neo client, I think the discussion you opened in the main DuckDB repo is a better place for this question. (I don't personally know the detailed answer, but you might try the CHECKPOINT command. See https://duckdb.org/docs/stable/sql/statements/delete.html#limitations-on-reclaiming-memory-and-disk-space.)
-
Thank you for the suggestion, @jraymakers! But I might have found something: it looks like it's related to compacting.
-
Hi!
I am noticing a difference in the DB file size depending on how I write it to disk.
For example, if I crunch my data in memory and then write it to disk, the file is relatively small.
However, if I instantiate with a file from the beginning, the result is bigger.
How can I make sure the final file is the same size in both cases? Is it because of some compression option? Or maybe I need to do something extra to ensure the persistent file is really in sync with the last operations?
Thank you in advance for your help. 🙏