I have a question that I can't seem to find the answer to. I have been using a Raspberry Pi to automate some scripts that pull data from SQL databases. One issue that has come up a few times is that my Python process gets killed, and from the logs it looks like it's due to insufficient RAM. This is on a Raspberry Pi 3B+, so only 1 GB of RAM. My question is: would there be any difference running it on, say, a 1 GB OS X system? Does another operating system or CPU architecture manage RAM better in this scenario, for example by writing swap files to the hard drive? Or, since it's a Python process, is that something the operating system can't directly influence?
Note: this is really just for my own understanding of how these factors interact. I am pretty sure rewriting the code to process the data in chunks would work as a workaround on the RPi.
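(For what it's worth, here is a minimal sketch of that chunked approach using the DB-API fetchmany() pattern; the sqlite3 driver, the table name, and process_rows() are placeholders for whatever database and processing the real script uses.)

    import sqlite3  # stand-in for whatever DB-API driver the real script uses

    CHUNK_SIZE = 5000  # upper bound on rows held in memory at any one time

    def process_rows(rows):
        # placeholder for the real per-chunk processing
        print("processed %d rows" % len(rows))

    conn = sqlite3.connect("data.db")            # hypothetical database file
    cur = conn.cursor()
    cur.execute("SELECT * FROM measurements")    # hypothetical table

    while True:
        rows = cur.fetchmany(CHUNK_SIZE)         # only CHUNK_SIZE rows in RAM at once
        if not rows:
            break
        process_rows(rows)

    conn.close()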
Related
I have a Python script that works fine on my main computer without problems, but when I uploaded it to an Ubuntu server it started crashing. I spent a long time trying to work out what the problem was and looked at the system logs. It turned out that Ubuntu automatically force-terminates the script due to lack of memory (the server has 512 MB of RAM). How can I profile how much memory the program consumes under its different modes of operation?
Have a look at something like Guppy3, which includes heapy, a 'heap analysis toolset' that can help you find where the memory's being used/held. Some links to information on how to use it are in the project's README.
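For example (assuming guppy3 has been installed with pip install guppy3), the basic pattern is roughly:

    from guppy import hpy   # guppy3 installs under the package name "guppy"

    hp = hpy()
    hp.setrelheap()          # optional: only count allocations made after this point

    # ... run the part of the program suspected of eating memory ...

    print(hp.heap())         # prints a per-type breakdown of objects on the heap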
If you have a core file, consider using https://github.com/vmware/chap, which will let you look at both Python and native allocations.
Once you have opened the core, "summarize used" is probably a good place to start.
I am working on a Python script that pulls data from an Access database via ODBC and loads it into a SQLite database managed by Django.
The script takes a fair while to run, so I was investigating where the bottlenecks are. I noticed in Task Manager that while the script is running, Python itself has relatively small CPU usage (<5% on a 6-core/12-thread system), but "Antimalware Service Executable" and "Windows Explorer" jump from virtually nothing to 16% and 10% respectively.
I attempted to add Windows Defender exclusions for the Python directory, the source code directory, and the location of the Access DB file, but this did not have any noticeable effect.
As a bit of background, the script runs thousands of queries, so disk I/O is happening quite frequently.
Is there any troubleshooting I can do to diagnose why this is happening and/or whether it is affecting performance?
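(For anyone trying to reproduce the situation, a script of this shape might look roughly like the sketch below; the settings module, driver string, table name, and Django model are all hypothetical. Batching reads with fetchmany() and writes with bulk_create() inside a single transaction is one general way to reduce the number of small I/O operations that a real-time scanner reacts to.)

    import os
    import django
    import pyodbc

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical
    django.setup()

    from django.db import transaction
    from myapp.models import Reading    # hypothetical Django model

    ACCESS_CONN = (
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=C:\data\source.accdb"     # hypothetical path to the Access file
    )

    conn = pyodbc.connect(ACCESS_CONN)
    cur = conn.cursor()
    cur.execute("SELECT id, value FROM Measurements")   # hypothetical table

    with transaction.atomic():          # one SQLite transaction instead of thousands
        while True:
            rows = cur.fetchmany(1000)  # pull rows in batches rather than one by one
            if not rows:
                break
            Reading.objects.bulk_create(
                [Reading(source_id=row.id, value=row.value) for row in rows]
            )

    conn.close()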
At the moment I am running Python scripts on the Pi, triggered by another voice recognition Python script. I now also want to run these scripts from the internet. From a little research, one way could be to set up a small web server on the Pi, such as lighttpd, and create a database on it. Then a small additional script would periodically check a value in the database; this value can be modified over the internet. Depending on the value, I would either use the voice recognition script or use the other values in the database to run the Python scripts.
My question is: is this method efficient, or is there a simpler way to do this? I am fairly competent at Python but totally new to web servers and databases. However, I don't mind spending time learning how to use them.
Thanks in advance!
One route that I personally chose was to configure the Pi as a LAMP server (Linux, Apache, MySQL, Python). Some great instructions can be found here: http://www.wikihow.com/Make-a-Raspberry-Pi-Web-Server
If this is overkill, have you considered using cron jobs to automate your Python scripts? You could then set up times at which your two scripts would run, and with a little inter-process communication you would have two entities that are aware of each other. http://www.thesitewizard.com/general/set-cron-job.shtml
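As a rough illustration of the "small script which periodically checks a value in the database" idea from the question (which a cron job could run every minute or so), a sketch might look like this; the SQLite file, the commands table, and the script paths are all hypothetical:

    import sqlite3
    import subprocess

    DB_PATH = "/home/pi/control.db"   # hypothetical database updated by the web front end

    conn = sqlite3.connect(DB_PATH)
    row = conn.execute(
        "SELECT action FROM commands ORDER BY id DESC LIMIT 1"
    ).fetchone()
    conn.close()

    if row is not None:
        action = row[0]
        if action == "voice":
            subprocess.run(["python3", "/home/pi/scripts/voice_recognition.py"])
        elif action == "other":
            subprocess.run(["python3", "/home/pi/scripts/other_task.py"])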
I have a large project that runs on an application server. It does pipelined processing of large batches of data and works fine on one Linux system (the old production environment) and one Windows system (my dev environment).
However, we're upgrading our infrastructure and moving production to a new Linux system, based on the same image used for the existing production system (we use AWS). Because of this, the Python version (2.7) and libraries should be identical; we're also verifying this ourselves using file hashes.
Our issue is that when we attempt to start processing on the new server, we see a very strange message written to standard out, "Removing descriptor: [some number]", followed by the server hanging. I cannot duplicate this on the dev machine.
Has anyone ever encountered behavior like this in Python before? Besides modules in the Python standard library, we are also using eventlet and BeautifulSoup. In the standard library we lean heavily on urllib2, re, cElementTree, and multiprocessing (mostly the pools).
wberry was correct in his comment: I was running into a maximum-descriptors-per-process issue. This limit seems to be highly dependent on the operating system. Reducing the size of the batches each worker process handles to below the process's file descriptor limit solved the problem.
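For anyone who hits the same thing, the per-process descriptor limit can be inspected (and, up to the hard limit, raised) from inside Python with the standard resource module; sizing batches below the soft limit is essentially what fixed it here. A minimal sketch:

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("descriptor limit: soft=%s hard=%s" % (soft, hard))

    # keep each batch comfortably below the soft limit, leaving headroom
    # for stdin/stdout, sockets, log files, etc.
    BATCH_SIZE = max(1, soft - 64)

    # alternatively, raise the soft limit as far as the hard limit allows (Linux)
    if hard != resource.RLIM_INFINITY:
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))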
I am new to Python and struggling to find out how to control the amount of memory a Python process can take. I am running Python on a CentOS machine with more than 2 GB of main memory. Python is taking up only 128 MB of this and I want to allocate it more. I searched all over the internet for the last half hour and found absolutely nothing! Why is it so difficult to find information on Python-related stuff :(
I would be happy if someone could shed some light on how to configure Python for things like the allowed memory size, the number of threads, etc.
A link to a site describing Python's tunable parameters would also be appreciated.
Forget all that: Python just allocates more memory as needed. There isn't a myriad of command-line arguments for the VM as in Java; just let it run. For all command-line switches you can run python -h or read man python.
Are you sure that the machine does not have a 128 MB per-process limit? If you are running the Python script as a CGI inside a web server, it is quite likely that a process limit is set; you will need to look at the web server configuration.
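One quick way to check whether such a per-process limit applies is to ask for the address-space limit from inside the script itself (a sketch using the standard resource module, POSIX only):

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_AS)   # address-space limit, in bytes

    def fmt(limit):
        if limit == resource.RLIM_INFINITY:
            return "unlimited"
        return "%.0f MB" % (limit / (1024.0 * 1024.0))

    print("address space limit: soft=%s hard=%s" % (fmt(soft), fmt(hard)))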