The solution came last night: I read in Perl Debugged that Unix processes only grow! On page 176 it says: "Note that the amount of memory allocated by a Unix process never decreases before it terminates; whenever it frees memory it makes it available only for the same process to reuse again later. (It is not returned to the free pool for other processes to use.)"
That is mostly true and a good starting point for understanding it, but it is not the whole story.
You can think of Linux as having a two-level memory allocation system: the kernel gives memory to the C library (via sbrk, mmap, etc), and then the C library gives it to the application (via malloc etc).
There is a little bit of slack in the C library: sometimes it will ask the OS for more than it needs at the moment, and it will not necessarily return freed memory. Instead, freed memory is hoarded because it will probably be needed again soon.
Above a certain high-water mark the C allocator may return memory to the OS. With glibc there are parameters you can tune to control this behaviour (the trim threshold, via mallopt), but in general the defaults are fine.
And this explanation is a generalization too: some programs, particularly databases, request memory of their own using mmap, independently of the C allocator.
In addition, some programs map files into memory, and if they release that mapping then the memory will be returned to the OS straight away.
Of course all this is only at the level of virtual memory. Normally we're interested in physical memory because it's more scarce. Even if the C library never returns memory to the kernel, the kernel may eventually page it out to disk and free up the physical memory for other uses.
The fourth talk was about raising exceptions in signal handlers in Python, and the problem this causes.
What an interesting problem!
If I remember correctly (and it's been a long time), the Java specification says something sensible about asynchronous exceptions. I suppose the Python people have read that.