I wrote a simple HTTP server in Python to manage a database hosted on a server via a web UI. It is perfectly functional and works as intended. However, it has one huge problem: it won't stay up. It will work for an hour or so, but if it is left unused for a long period, I have to re-initialize it before I can use it again. Right now the method I use to make it serve is:
from BaseHTTPServer import HTTPServer

def main():
    global db
    db = DB("localhost")  # DB and MyHandler are defined elsewhere in the script
    server = HTTPServer(('', 8080), MyHandler)
    print 'started httpserver...'
    server.serve_forever()

if __name__ == '__main__':
    main()
I run this in the background on a Linux server with a command like sudo python webserver.py & to detach it, but as I mentioned, after a while it quits. Any advice is appreciated, because as it stands I don't see why it shuts down.
You can write a UNIX daemon in Python using the python-daemon package, or a Windows service using pywin32.
Unfortunately, I know of no "portable" solution to writing daemon / service processes (in Python, or otherwise).
Here's one piece of advice, framed as a story about driving: you certainly want to drive safely (figure out why your program is failing and fix it). In the (rare?) case of a crash, some monitoring infrastructure, like monit, can be helpful to restart crashed processes. But you wouldn't want to use it to paper over a crash, just like you wouldn't want to deploy your airbag every time you stopped the car.
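For illustration, a minimal monit stanza along these lines might do it (the pidfile path and start command are assumptions about how the server is launched, and monit needs the process to actually write that pidfile):

check process webserver with pidfile /var/run/webserver.pid
    start program = "/usr/bin/python /path/to/webserver.py"
    if failed host 127.0.0.1 port 8080 then restart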
Well, the first step is to figure out why it's crashing. There are two likely possibilities:
The serve_forever call is throwing an exception.
The Python process is crashing or being terminated.
In the former case, you can make it live forever by wrapping the call in a loop with a try-except. It's probably a good idea to log the error details as well.
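A minimal sketch of that loop, reusing the server object from the question (the log file name and retry delay are just examples):

import logging
import time

logging.basicConfig(filename='webserver.log', level=logging.INFO)

while True:
    try:
        server.serve_forever()
    except Exception:
        # log the full traceback, pause briefly, then resume serving
        logging.exception('serve_forever() raised; restarting in 5 seconds')
        time.sleep(5)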
The latter case is a bit trickier, because it could be caused by a variety of things. Does it happen if you run the script in the foreground? If not, maybe some kind of maintenance service is terminating your script.
Not really a complete answer, but perhaps enough to help you diagnose the problem.
Have you tried running it from inside a screen session?
$ screen -L sudo python webserver.py
As an alternative to screen there is nohup, which will ensure the process carries on running after you've logged out.
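For example, reusing the command from the question:

$ nohup sudo python webserver.py &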
It's worth checking the logs to see why it's being killed or quitting, as the cause may not be the operating system but an internal fault.
Related
I have a Python script that reads data from an OPC DA server and then pushes it to InfluxDB.
So basically it connects to the OPC DA server using the OpenOPC library and to InfluxDB using the InfluxDB Python client, then starts an infinite while loop that runs every 5 seconds to read and push data to the database.
I have installed the script as a service using NSSM. What is the best practice to ensure that the script is running 24/7? How do I avoid crashes?
Should I daemonize the script?
Thank you in advance,
Bnjroos
I suggest at least adding logging at the script level. You could also use custom exit codes from Python so NSSM knows to report a failure. Your failures will most likely happen when connecting to your services (i.e. the network is down or similar), so you could raise custom exceptions and exit with distinct codes, letting NSSM restart the script. If it's running every 5 seconds you would probably know very soon.
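A rough sketch of the exit-code idea (run_loop() is a hypothetical stand-in for your OPC-to-InfluxDB loop, and the exit code value is arbitrary; by default NSSM restarts the application when it exits):

import logging
import sys

logging.basicConfig(filename='collector.log', level=logging.INFO)

EXIT_CONNECTION_LOST = 3  # arbitrary custom exit code for NSSM to report

def run_loop():
    pass  # hypothetical: connect to OPC DA and InfluxDB, poll every 5 seconds

if __name__ == '__main__':
    try:
        run_loop()
    except Exception:
        # log the traceback and exit non-zero so NSSM can restart the service
        logging.exception('collector died; exiting for NSSM to restart')
        sys.exit(EXIT_CONNECTION_LOST)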
Ensuring availability and avoiding crashes is about your code more than infrastructure, hence the above recommendations.
I believe using NSSM (for scheduling and restarts) is better than daemonizing, since daemonizing basically re-implements NSSM's functionality inside your script and potentially adds more code that may fail.
Guys and ladies, I am new to programming. I have written a script. It just checks whether some data is correct or not. I want that script to run 24/7 on a Microsoft server at my job (not on my PC). Please let me know how to do that.
thanks in advance
Aside from general server set-up, you will just need to install Python like you would on any server.
As for the running, something like python yourScript.py would work fine. In order to run it 24/7, you need to put your entire script in a while True: loop so that it never stops running. Note that you should also include some decent error handling in the event of an issue, so that it doesn't just crash.
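A rough shape of that loop (check_data() is a hypothetical stand-in for your existing validation logic, and the interval is just an example):

import logging
import time

logging.basicConfig(filename='checker.log', level=logging.INFO)

def check_data():
    pass  # hypothetical: your existing data-checking logic goes here

while True:
    try:
        check_data()
    except Exception:
        # don't let one bad run kill the whole loop; log it and carry on
        logging.exception('check failed; will retry')
    time.sleep(60)  # wait a minute between runs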
I am facing the following problem and I am not sure if my approach is anywhere near 'right'.
I've built a Django application that handles students' assignments for a programming subject at university. The original version of this application (https://github.com/elcoya/seal) used a chroot'd daemon to get the code delivered by the students, place a bash script alongside that code, and execute the bash script, which could contain any kind of operations, like building and testing the students' code. So far... so good. However, running this daemon was a bit of a headache. Since it ran within a jail, the bind-mounted /proc within that jail became obsolete every time the server was restarted (it was restarted from time to time :( ). And if some error occurred in the daemon, or the process died or was killed, it stopped doing its job of "correcting" the students' deliveries.
To prevent these errors from happening, and to have a more trustworthy automatic correction service, I would like to install a 'django-kronos' task (which runs from the crontab on the server) to do the same job. This would be great, but it would mean that from my Django stack code, I would need to move into the chroot to run the mentioned bash script.
SO suggests this post, but it is from 2012, and it kind of advises against what I am trying to do. Am I missing something here? Is os.chroot('/path/to/jail') the way to go?
You could run your user scripts inside a Docker container. Docker gives you all the benefits of a jail and much more. For instance, it can restart a container for you if the host running it is rebooted: https://docs.docker.com/engine/admin/start-containers-automatically/
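For example, with a restart policy (the image name is a placeholder):

$ docker run -d --restart unless-stopped my-correction-image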
I have a Python script and am wondering: is there any way I can ensure that the script runs continuously on a remote computer? For example, if the script crashes for whatever reason, is there a way to start it up automatically instead of having to remote-desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
Many ways - in the case of Windows, even a simple looping batch file would probably do - just have it start the script in a loop (whenever the script crashes, control returns to the batch file and it restarts the script).
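A minimal sketch of such a batch file (yourScript.py is a placeholder):

@echo off
:loop
python yourScript.py
rem the script exited or crashed - start it again
goto loop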
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you ask by using SimpleXMLRPCServer and xmlrpc.client. You have examples of simple configurations in the docs.
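A minimal sketch of that pattern, close to the examples in the docs (the host, port, and ping function are placeholders):

# server.py - runs on the remote machine
from xmlrpc.server import SimpleXMLRPCServer

def ping():
    return 'alive'

server = SimpleXMLRPCServer(('0.0.0.0', 8000), allow_none=True)
server.register_function(ping)
server.serve_forever()

# client.py - call the remote function from another machine
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy('http://remote-host:8000/')
print(proxy.ping())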
Depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate Python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX only. You can clone a subset of the functionality, though.
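If supervisord is an option, a minimal program section might look like this (the name and paths are placeholders):

[program:myscript]
command=python /path/to/myscript.py
autostart=true
autorestart=true
stderr_logfile=/var/log/myscript.err.log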
I'm running Django on Linux using fcgi and Lighttpd. Every now and again (about once a day) the server just dies. I'm using the latest stable release of Django, Python and Lighttpd.
The only thing I can think of is that my program is opening a lot of files and executing a lot of external processes, but I'm fairly sure that side of things is watertight.
Looking at the error and access logs, there's nothing exceptional happening (i.e. load isn't above normal). On those occasions where I have had exceptions from Python, these have shown up in the error.log, but when this crash happens I get nothing.
Is there any way of finding out why the process died? Short of putting logging statements on every single line? Obviously I can't reproduce this so I don't know exactly where to look.
Edit
It's the Django process that's dying. I'm running the server with manage.py runfcgi daemonize=true method=threaded host=127.0.0.1 port=12345
You could edit manage.py to redirect stderr to a file, assuming runfcgi doesn't do that itself:
import sys

# guard against manage.py being run with no arguments
if len(sys.argv) > 1 and sys.argv[1] == "runfcgi":
    sys.stderr = open("/path/to/my/django-error.log", "a")
Is this on your own server (do you own the box)? I've had that problem on shared hosting, where the host was just killing long-running processes. Do you know if your fcgi is receiving a SIGTERM?
I have had the same problems. Not only do they die without warning or reason, they also leak like crazy, with threads left stuck without a master process. We solved this by having a cron job run every 5 minutes that checks whether the port is still responding, and restarts the process if not.
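A rough sketch of such a watchdog, run from cron every 5 minutes (the port and restart command are placeholders for whatever your setup uses):

# watchdog.py
import socket
import subprocess

def port_open(host, port):
    s = socket.socket()
    s.settimeout(2)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

if not port_open('127.0.0.1', 12345):
    # placeholder: whatever command restarts your fcgi process
    subprocess.call(['/path/to/restart-fcgi.sh'])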
By the way, we've now given up on fcgi (we're slowly migrating) and moved over to uwsgi.