Finding out Python interpreter memory consumption - python

How can I find out the Python interpreter's memory consumption?
I tried memory-profiler, but it only shows the memory consumption of the running code.
PS: It seems tracemalloc does what I want.
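A minimal sketch of how tracemalloc reports allocations (all names here are from the standard library; the allocation itself is just a filler workload):

```python
import tracemalloc

tracemalloc.start()  # begin tracing Python-level allocations

data = [bytes(1000) for _ in range(1000)]  # allocate roughly 1 MiB

# get_traced_memory() returns (current, peak) in bytes,
# counting only allocations made after start()
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")

tracemalloc.stop()
```

Note that this traces allocations made by Python objects after `start()`, not the interpreter's own baseline footprint; for the latter, an external view such as `psutil.Process().memory_info()` or the OS task manager is closer to what you see in htop.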

Related

Can python implement program-level virtual memory?

Recently I wrote a Python program that requires a lot of memory. The computer ran out of memory and the program crashed.
It is known that the operating system can use part of the hard disk as virtual memory, which could solve the problem of insufficient memory. Changing the operating system's virtual memory settings would solve the problem for Python programs, but the scope of impact is too wide.
Can Python implement program-level virtual memory? That is, when memory is insufficient, map the hard disk into the program's memory.
I need to run a Python program with large memory consumption.
Using disk space as memory is usually called swapping.
It is usually simpler to do it yourself than to write a script that does it for you. But if you insist on your script doing it, one way is simply to execute the same commands you would use to do it manually.
Here is a tutorial on how to add swap to a Linux system (first result on Google): https://linuxize.com/post/create-a-linux-swap-file/
Take each command in that tutorial and run it using subprocess, and you will get the desired result.
If you are on Windows (which you did not specify), the same approach applies, but I could not quickly find an easy way to do it with commands.
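A sketch of driving those tutorial commands from Python with subprocess. The command list below is my reading of the linked tutorial (the swap-file path and size are placeholders); it is shown as a dry run because the real commands must be run as root:

```python
import subprocess

# Commands along the lines of the linuxize tutorial (path/size are
# illustrative; these require root privileges to actually run).
commands = [
    ["fallocate", "-l", "1G", "/swapfile"],
    ["chmod", "600", "/swapfile"],
    ["mkswap", "/swapfile"],
    ["swapon", "/swapfile"],
]

def run_all(cmds, dry_run=True):
    for cmd in cmds:
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

run_all(commands)  # flip dry_run=False to actually execute (as root)
```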

python3 -- RAM usage (htop and tracemalloc give different values)

I have code whose RAM usage I need to keep low, so I've been tracing RAM usage through tracemalloc.get_traced_memory(). However, I have found that what tracemalloc.get_traced_memory() reports is very different from the RAM usage I see through htop. In particular, the usage appearing in htop is more than twice the usage returned by tracemalloc.get_traced_memory()[1] (which is supposed to be the peak value). I wonder why this is happening, and what would be a more accurate way to trace RAM usage than tracemalloc.get_traced_memory()?
Ubuntu 20.04.4 LTS
python version: 3.7.15
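The gap is expected: tracemalloc counts only Python-level allocations made after `start()`, while htop shows the process RSS, which also includes the interpreter itself, loaded shared libraries, and allocator overhead. A small comparison sketch (standard-library calls only; `ru_maxrss` is in KiB on Linux, bytes on macOS):

```python
import resource
import tracemalloc

tracemalloc.start()
blob = bytearray(50 * 1024 * 1024)  # 50 MiB that tracemalloc will see

traced_current, traced_peak = tracemalloc.get_traced_memory()  # bytes

# Peak resident set size of the whole process, as htop would show it
# (KiB on Linux, bytes on macOS).
rss_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(f"tracemalloc peak: {traced_peak / 2**20:.1f} MiB")
print(f"process peak RSS: {rss_kib / 1024:.1f} MiB (Linux units)")

tracemalloc.stop()
```

The RSS figure will always be larger, since it includes everything the process maps, not just traced Python allocations; for an htop-comparable number from inside the program, `psutil.Process().memory_info().rss` is the usual choice.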

Limiting the memory consumption of Python on Windows

Is there a way to limit the memory consumption of Python on Windows?
I'm running some experiments with TensorFlow 1.3.0, Python 3.6.3, Windows 7, all 64-bit. Problem: even a very small error in a TensorFlow program can cause runaway memory consumption. Unfortunately, Windows tries to use virtual memory to keep the program running, with the result that it goes so deep into disk thrashing that the machine locks up for several minutes.
I'd like to solve the problem by switching to 32-bit Python, but it seems TensorFlow only supports 64-bit.
Does any element of the stack provide a way to limit memory consumption? Of course I'm not expecting a buggy program to work; what I want is for it to crash once memory consumption exceeds a certain threshold, so it doesn't go into disk thrashing and lock up the machine.
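The standard-library way to get exactly this crash-instead-of-thrash behavior is `resource.setrlimit`, though it is POSIX-only and so does not directly answer the Windows case (on Windows the closest equivalent is a Job Object with a process memory limit, e.g. via pywin32's win32job module, not shown here). A sketch of the POSIX version:

```python
import resource

# Cap the process address space at 2 GiB (POSIX-only; limit is illustrative).
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (2 * 1024**3, hard))

refused = False
try:
    huge = bytearray(4 * 1024**3)  # would exceed the cap
except MemoryError:
    refused = True
    print("allocation refused: clean MemoryError instead of disk thrashing")

resource.setrlimit(resource.RLIMIT_AS, (soft, hard))  # restore the old limit
```

With the limit in place, an over-allocating program dies with a MemoryError (or is killed) the moment it crosses the threshold, rather than pushing the machine into swap.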

Find memory leaks when extending python with C

I wrote some C code to create a python module. I wrote the code myself (did not use SWIG etc). If you're interested, the C code is at the bottom of this thread.
Q: Is there any way to (hopefully, easily) find whether my C code has memory leaks? Is there any way to use python's awesomeness for finding memory leaks in its extensions?
If you are using a Linux environment, you can easily find memory leaks by using a tool named valgrind.
To get valgrind, first install it with the command
sudo apt-get install valgrind
After the installation completes, compile your C code with debugging symbols (gcc -g) and run the result under valgrind; you can then easily find the memory leaks. Valgrind shows the reason for each memory leak and also specifies the line at which the leak occurred.

How to check for memory leaks in Guile extension modules?

I develop an extension module for Guile, written in C. This extension module embeds a Python interpreter.
Since this extension module invokes the Python interpreter, I need to verify that it properly manages the memory occupied by Python objects.
I found that the Python interpreter is well-behaved in its own memory handling, so that by running valgrind I can find memory leaks due to bugs in my own Python interpreter embedding code, if there are no other interfering factors.
However, when I run Guile under valgrind, valgrind reports memory leaks. Such memory leaks obscure any memory leaks due to my own code.
The question is what can I do to separate memory leaks due to bugs in my code from memory leaks reported by valgrind as due to Guile. Another tool instead of valgrind? Special valgrind options? Give up and rely upon manual code walkthrough?
You've got a couple of options. One is to write a suppressions file for valgrind that turns off reporting of the stuff you're not working on. Python has such a file, for example:
http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp
If valgrind doesn't like your setup, another possibility is using libmudflap; you compile your program with gcc -fmudflap -lmudflap, and the resulting code is instrumented for pointer debugging. Described in the gcc docs, and here: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging
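For Guile you would write an analogous suppressions file yourself. Each entry names the tool and error kind, then a call-stack pattern to ignore; a sketch (the entry name is arbitrary, and `scm_gc_malloc` stands in for whichever Guile frames actually appear in your reports):

```
{
   ignore-guile-gc-leaks
   Memcheck:Leak
   ...
   fun:scm_gc_malloc
}
```

Rather than writing entries by hand, you can run `valgrind --gen-suppressions=all` once, copy the generated entries for the Guile-internal leaks into a file, and then pass it on later runs with `valgrind --suppressions=guile.supp`, leaving only leaks from your own embedding code in the report.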
