I have a Python program that was getting killed due to an out-of-memory error. I was using some deep recursion, so I decided to call gc.collect() at the beginning of the method.
This solved the problem and the program is no longer killed. Does anyone know why I had to call the garbage collector manually, and why it wasn't taken care of automatically?
Now I worry that this is slowing down my program, and I wonder: is there a way to configure how, and how often, garbage collection is performed from outside my application (without calling gc.collect() from the code)?
PS. This was not a problem on my macOS machine, only when I deployed the code to a Linux VM on GCP.
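For what it's worth, CPython's collection frequency can also be tuned once at startup instead of calling gc.collect() inside every method. This is a minimal sketch of the gc module's threshold API, not a confirmed fix for this particular crash:

```python
import gc

# CPython's defaults are (700, 10, 10): a generation-0 collection
# runs once (allocations - deallocations) exceeds 700.
print(gc.get_threshold())

# Lowering the first threshold makes collections more frequent,
# trading some CPU time for a smaller peak heap.
gc.set_threshold(100, 10, 10)

# A one-off full collection; returns the number of unreachable
# objects found.
print(gc.collect())
```

Whether this helps depends on whether the memory is actually held in reference cycles; memory held by live references is not reclaimable by any GC setting.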
Related
I have a Python script that works fine on my main computer. But when I uploaded it to an Ubuntu server it started crashing. I puzzled over the problem for a long time and then looked at the system logs: it turned out that Ubuntu was automatically, forcibly terminating the script due to lack of memory (the server has 512 MB of RAM). How can I profile the program's memory consumption under its different modes of operation?
Have a look at something like Guppy3, which includes heapy, a "heap analysis toolset" that can help you find where the memory is being used or held. Links to information on how to use it are in the project's README.
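If installing a third-party package isn't an option, the standard library's tracemalloc module (Python 3.4+) gives a similar first-pass view of where memory is being allocated; a minimal sketch:

```python
import tracemalloc

tracemalloc.start()

# Allocate something noticeable so the report has data in it.
data = [bytes(1000) for _ in range(1000)]

snapshot = tracemalloc.take_snapshot()

# Print the source lines responsible for the most allocated memory.
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
```

Running this periodically (or comparing two snapshots with snapshot.compare_to) shows which lines keep growing between checks.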
If you have a core dump, consider using https://github.com/vmware/chap, which lets you look at both Python and native allocations.
Once you have opened the core, "summarize used" is probably a good place to start.
I have a python script that reads files in multiple threads and saves the content to a DB.
This works fine on Windows and stays at around 500 MB of memory usage.
However, the same program builds its memory usage up to the maximum available (14 GB) and basically kills the machine.
Could this be a garbage collection problem?
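One thing worth ruling out before blaming the GC is an unbounded queue between the reader threads and the DB writer: if readers outpace the database, file contents pile up in memory without limit. A bounded queue.Queue applies backpressure; a minimal sketch, with a plain list standing in for the real DB insert:

```python
import queue
import threading

# Bounded queue: readers block when it is full, so unread file
# contents cannot accumulate faster than the writer drains them.
work = queue.Queue(maxsize=10)
saved = []

def reader(chunks):
    for chunk in chunks:
        work.put(chunk)      # blocks once maxsize items are pending
    work.put(None)           # sentinel: tells the writer to stop

def writer():
    while True:
        item = work.get()
        if item is None:
            break
        saved.append(item)   # stand-in for the real DB insert

t1 = threading.Thread(target=reader, args=([b"a", b"b", b"c"],))
t2 = threading.Thread(target=writer)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(saved))  # → 3
```

With an unbounded queue (the default, maxsize=0), the same structure can grow without limit on a machine where the DB is slower than the disk.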
So I recently SSHed into my Linode Django server, and whenever I try to do anything it gets killed for running out of memory. After a little exploration I found this little gem (the output of top):
It seems I somehow have many instances of Apache and Python open all at once, eating up all my memory. Is this normal behavior for Django? How could this have happened? What do I do to fix my memory shortage?
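On a small VPS, memory exhaustion usually comes from Apache's process count rather than Django itself: each worker embeds its own Python interpreter. Capping the prefork MPM in httpd.conf bounds the total; a sketch only, since directive names and sensible values depend on your Apache version and RAM (MaxClients in 2.2 became MaxRequestWorkers in 2.4):

```apache
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         4
    # Hard cap on simultaneous Apache worker processes.
    MaxRequestWorkers       10
    # Recycle children periodically to limit slow leaks.
    MaxConnectionsPerChild  500
</IfModule>
```

Pick the cap by dividing the RAM you can spare by the resident size of one worker as shown in top.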
I am running some tests nightly on a VM with a CentOS operating system. Recently the tests have been taking up all the available memory and nearly all the swap on the machine. I assigned the VM twice as much memory and it is still happening, which results in the VM's physical host dying. These tests previously ran without needing half as much memory, so I need to use some form of Python memory analyzer to investigate what is going on.
I've looked at PySizer and Heapy, but after some research Dowser seems to be the one I'm after, as it requires zero changes to code.
So far, from the documentation and googling, I've got this code in its own class:
import cherrypy
import dowser

class MemoryAnalyzer:
    def memoryCheck(self):
        cherrypy.config.update({'server.socket_port': 8080})
        cherrypy.tree.mount(dowser.Root())
        cherrypy.engine.start()
I was hoping this would bring up the web interface shown in the documentation to track all instances of Python running on the host, but it doesn't work. I was also confused by this line in the documentation:
'python dowser __init__.py'.
Is it possible to just run this? I get the error:
/usr/bin/python: can't find '__main__.py' in 'dowser'
Can Dowser run independently from my test suite on the VM? Or will I have to put the above code into my main class and run my tests through it to trace the Python instances?
Dowser is meant to be run as part of your application. Therefore, wherever you initialize the application, add the lines
import dowser
cherrypy.tree.mount(dowser.Root(), '/dowser')
Then you can browse to http://localhost:8080/dowser to view the dowser interface.
Note that the invocation you quoted from the documentation is for testing dowser. The correct invocation for that is python dowser/__init__.py.
Managed to get Dowser to work using this blog post http://www.aminus.org/blogs/index.php/2008/06/11/tracking-memory-leaks-with-dowser?blog=2 and by changing the port to 8088 instead of 8080 (which wasn't in use on the machine, but still didn't work!)
I am new to Python and struggling to find out how to control the amount of memory a Python process can take. I am running Python on a CentOS machine with more than 2 GB of main memory. Python is taking up only 128 MB of this and I want to allocate it more. I've searched all over the internet for the last half hour and found absolutely nothing! Why is it so difficult to find information on Python-related stuff :(
I would be happy if someone could shed some light on how to configure Python for things like allowed memory size, number of threads, etc.
A link to a site describing most of Python's controllable parameters would be much appreciated.
Forget all that: Python simply allocates more memory as needed. There is no myriad of command-line arguments for tuning the VM as there is in Java; just let it run. For all the command-line switches there are, run python -h or read man python.
Are you sure that the machine does not have a 128 MB process limit? If you are running the Python script as a CGI inside a web server, it is quite likely that a process limit is set; you will need to look at the web server configuration.
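One way to check this from inside the script is the standard library's resource module (Unix only), which reports the limits the process was started with; a quick sketch:

```python
import resource

# RLIMIT_AS is the cap on total virtual memory for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# -1 (resource.RLIM_INFINITY) means no limit is set.
print(soft, hard)
```

If the soft limit comes back as something like 134217728 (128 MB), the cap is being imposed by the environment (ulimit, the web server, or a container), not by Python.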