Description
I am seeing strange behaviour when running RMG-Py on a computer with a huge amount of shared memory. The execution statistics in the log file and in statistics.xls claim that after ~2 hours the job had used only 912 MB of memory, yet the queuing system killed it for exceeding its limit of 32,000 MB.
My guess is that the memory has been deallocated by Python but not returned to the operating system (see http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm for example). I suspect that on this computer Python sees plenty of spare memory (the machine has 4,194,304 MB), so, to save time, it doesn't bother returning freed memory to the operating system, and the queue manager then kills the job when its resident size exceeds 32,000 MB.
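One way to test this hypothesis directly is to compare what the operating system thinks the process is using (its resident set size) before and after Python frees a large allocation. A minimal sketch, assuming Linux, where the `VmRSS` field in `/proc/self/status` reports the resident size (this is Linux-specific and not part of RMG-Py itself):

```python
import gc

def current_rss_mb():
    """Read the process's resident set size from /proc (Linux only)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # kB -> MB
    return 0.0

baseline = current_rss_mb()
data = [0] * (20 * 1024 * 1024)   # ~160 MB of list slots on a 64-bit build
allocated = current_rss_mb()
del data
gc.collect()
released = current_rss_mb()

print(f"baseline {baseline:.0f} MB, after alloc {allocated:.0f} MB, "
      f"after del {released:.0f} MB")
```

If the "after del" figure stays close to the "after alloc" figure, the allocator is holding onto freed pages rather than returning them to the OS, which would match the behaviour described above. Logging this value alongside RMG-Py's own statistics would show whether the two diverge.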
Any ideas how to confirm whether this is the case, and if so, how to fix it?
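If the diagnosis holds, one possible workaround is to ask the C library to hand its free heap memory back to the OS at convenient points (for example after each big iteration). This is only a sketch, assuming CPython on Linux with glibc: `malloc_trim` is a glibc extension and does not exist on other platforms, and the `"libc.so.6"` name is an assumption about the system:

```python
import ctypes

def trim_memory():
    """Ask glibc to return freed heap memory to the OS (Linux/glibc only)."""
    libc = ctypes.CDLL("libc.so.6")  # glibc-specific soname; assumption
    # malloc_trim(0) releases as much free memory from the top of the heap
    # (and free lists) as possible; it returns 1 if any memory was handed
    # back to the OS, 0 otherwise.
    return libc.malloc_trim(0)

result = trim_memory()
```

Whether this actually shrinks the resident size depends on heap fragmentation: pages still holding any live object cannot be released, so it helps most when large blocks were freed in one piece.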
Edit: see also: