memory use for objects in python
March 18, 2008
I spent some time today playing with various things to see how they affected yum's memory use, and came up with a few interesting results:
Here's the command we were running, on boxes with more or less the same number of pkgs installed:
yum --disablerepo='*' --enablerepo='development' update
on an i686 box the memory use during the depsolve stage was 112M
on an x86_64 box the memory use during the depsolve stage was 409M
Now, the number of pkgs available on the x86_64 box was higher, b/c x86_64 repos have about 25% more packages. But 25% more packages does not account for a nearly 4x spike in memory use.
After a fair bit of messing around and eliminating variables I found this:
“Changed in 2.5: The number of extra bytes allocated is 4*sizeof(size_t). Before it was 16 on all boxes, reflecting that Python couldn’t make use of allocations >= 2**32 bytes even on 64-bit boxes before 2.5.”
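You can watch that kind of over-allocation from Python itself. Here's a small sketch using a list (sys.getsizeof only arrived in Python 2.6, so it post-dates the 2.5 change quoted above, and the exact figures vary by version and platform):

```python
import sys

# Append one element at a time and record the list's reported size.
# CPython reserves extra headroom on each reallocation, so the size
# jumps in steps rather than growing by one pointer per append.
sizes = []
lst = []
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# Only a handful of distinct sizes appear -- each jump is a realloc
# that grabbed extra slots up front.
print(sizes)
```

On a 64-bit build each of those reserved slots is a full 8-byte pointer, so the headroom itself doubles relative to i686.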
From what I can tell, we end up with a 2x to 3x bump on x86_64.
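The bump tracks the pointer size: most CPython object headers are a handful of pointers and size_t's, each of which goes from 4 to 8 bytes on x86_64. A quick sketch in modern Python (again, sys.getsizeof is 2.6+; the printed numbers are version-dependent):

```python
import struct
import sys

# struct.calcsize('P') is the platform pointer size: 4 on i686
# builds, 8 on x86_64 builds.
ptr = struct.calcsize('P')
print("pointer size:", ptr)

# Per-object overhead scales with that pointer size, so the same
# objects cost roughly twice as much on a 64-bit interpreter.
print("empty tuple :", sys.getsizeof(()))
print("empty dict  :", sys.getsizeof({}))
print("small int   :", sys.getsizeof(1))
```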
This makes me sad.
Anyone know of a way to beat/mitigate this other than ‘use fewer python objects’?
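One partial answer is to make the objects you do keep smaller. Giving a class __slots__ drops the per-instance __dict__, which is usually the single biggest per-object cost. A sketch (the class and attribute names here are just illustrative, not yum's):

```python
import sys

class Plain(object):
    # Ordinary class: every instance carries its own __dict__.
    def __init__(self, name, ver):
        self.name = name
        self.ver = ver

class Slotted(object):
    # __slots__ stores the attributes in fixed slots on the
    # instance itself; no per-instance __dict__ is allocated.
    __slots__ = ('name', 'ver')

    def __init__(self, name, ver):
        self.name = name
        self.ver = ver

p = Plain('foo', '1.0')
s = Slotted('foo', '1.0')

# For the plain instance, count the instance plus its __dict__.
plain_cost = sys.getsizeof(p) + sys.getsizeof(p.__dict__)
slot_cost = sys.getsizeof(s)
print(plain_cost, slot_cost)
```

The savings per object are modest, but multiplied across tens of thousands of package objects they add up fast.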