Daemon vs Upstart for python script - python

I have written a module in Python and want it to run continuously once started, and I need to be able to stop it when I need to update other modules. I will likely be using monit to restart it if the module has crashed or is otherwise not running.
I was going through different techniques like Daemon, Upstart and many others.
Which is the best approach, so that I can use it across all my new modules to keep them running forever?

From your mention of Upstart I will assume that this question is for a service being run on an Ubuntu server.
On an Ubuntu server an upstart job is really the simplest and most convenient option for creating an always on service that starts up at the right time and can be stopped or reloaded with familiar commands.
To create an upstart service you need to add a single file to /etc/init, called <service-name>.conf. An example looks like this:
description "My chat server"
author "your#email-address.com"
start on runlevel [2345]
stop on runlevel [!2345]
env AN_ENVIRONMENTAL_VARIABLE=i-want-to-set
respawn
exec /srv/applications/chat.py
This means that every time the machine starts it will start the chat.py program, and if the process dies for whatever reason it will be restarted. You don't have to worry about double forking or otherwise daemonizing your code; that's handled for you by upstart.
If you want to stop or start your process you can do so with
service chat start
service chat stop
The name chat is taken automatically from the name of the .conf file inside /etc/init.
I'm only covering the basics of upstart here; there are lots of other features that make it even more useful, all available by running man upstart.
This method is much more convenient than writing your own daemonization code. A 4-8 line config file for a built-in Ubuntu component is much less error-prone than making your code safely double fork and then having another process monitor it to make sure it doesn't go away.
Monit is a bit of a red herring. If you want downtime alerts you will need to run a monitoring program on a separate server anyway. Rely on upstart to keep the process always running on a server. Then have a different service that makes sure the server is actually running. Downtime happens for many different reasons. A process running on the same server will tell you precisely nothing if the server itself goes down. You need a separate machine (or a third party provider like pingdom) to alert you about that condition.

You could check out supervisor. It can start a process at system startup and then keep it alive until shutdown.
The simplest configuration file would be:
[program:my_script]
command = /home/foo/bar/venv/bin/python /home/foo/bar/scripts/my_script.py
environment = MY_ENV_VAR=FOO, MY_OTHER_ENV_VAR=BAR
autostart = True
autorestart = True
Then you could link it into /etc/supervisord/conf.d, run sudo supervisorctl to enter supervisor's management console, type reread so that supervisor notices the new config entry, and then update so that the new program shows up on the status list.
To start/restart/stop a program you could execute sudo supervisorctl start/restart/stop my_script.

I used an old-style init script with the start-stop-daemon utility. Look at the skeleton script (skel) in /etc/init.d.

Related

Python & Django: Working on a chroot jail to run a single bash script

I am facing the following problem and I am not sure if my approach is anywhere near 'right'.
I've built a Django application that handles students' assignments for a programming subject at university. The original version of this application (https://github.com/elcoya/seal) used a chroot'd daemon to fetch the code delivered by the students, place a bash script alongside that code and execute it; that script could contain any kind of operations, like building and testing the students' code. So far... so good. However, running this daemon was a bit of a headache. Since it ran within a jail, the bind-mounted /proc within that jail became stale every time the server was restarted (which happened from time to time :( ), or some error occurred in the daemon and the process died or was killed, and it therefore stopped doing its job of "correcting" the students' deliveries.
To prevent these errors from happening, and to have a more trustworthy automatic correction service, I would like to install a 'django-kronos' task (which runs from the crontab on the server) to do the same job. This would be great, but it would mean that from my Django stack code I would need to move into the chroot to run the mentioned bash script.
SO suggests this post, but it is from 2012, and it kind of advises against what I am trying to do. Am I missing something here? Is os.chroot(/path/to/jail) the way to go?
You could run your user scripts inside a Docker container. Docker gives you all the benefits of a jail and much more. For instance, it can restart a container for you if the host running it is rebooted: https://docs.docker.com/engine/admin/start-containers-automatically/
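As a rough illustration, the container run could be kicked off from the Django/cron side with a plain subprocess call to the docker CLI. This is only a sketch: the image name (python:3), the mount point /code and the script name run_tests.sh are placeholders, not part of the original setup.
import subprocess

def run_submission(submission_dir):
    # Run a student's delivery inside a throwaway container.
    # Placeholders: image 'python:3', mount point '/code', script 'run_tests.sh'.
    subprocess.check_call([
        'docker', 'run',
        '--rm',                              # discard the container when the script finishes
        '--network', 'none',                 # no network access for untrusted code
        '-v', submission_dir + ':/code:ro',  # mount the delivery read-only
        'python:3',
        'bash', '/code/run_tests.sh',
    ])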

google compute engine connection keeps disconnecting

I have an instance on Google Compute Engine that I connect to from a terminal with gcutil ssh. On it I have several Django services. I run the server using python manage.py runserver 0.0.0.0:8000. The services are called from an iPhone application (iOS 6.1).
The problem I'm facing is that every few minutes (between 10 and 15) I get disconnected and have to reconnect and run the server again.
Why is my server being disconnected, and how can I keep it running?
Try using supervisord. It sounds like, for what you're trying to do, supervisor can keep your process up and running: http://supervisord.org/
Here's an example conf:
[program:app]
process_name = app-%(process_num)s
command = python /home/ubuntu/production/current/app/src/app.py --port=%(process_num)s
# Increase numprocs to run multiple processes on different ports.
# Note that the chat demo won't actually work in that configuration
# because it assumes all listeners are in one process.
numprocs = 4
numprocs_start = 8000
This is for running multiple processes of the same program. Just change around the args and it should work for you.
SSH normally times out after a period of inactivity, and that may be what is happening here. If so, this article might be useful to help configure SSH to send a regular message so connections are less likely to be dropped.
However, the core issue is that you'd like software you started at the terminal to keep running even when you're logged out. Consider using screen or tmux to host your shell sessions. This will allow your shell software to run even when you are not connected, and for you to pick up right where you left off when you reconnect. Here is a nice getting started post about tmux.
Once you're ready for production, take a look at the Django deployment docs.

python run shell script that spawns detached child processes

Updated post:
I have a python web application running on a port. It is used to monitor some other processes, and one of its features is to allow users to restart their own processes. The restart is done by invoking a bash script, which restarts those processes and runs them in the background.
The problem is, whenever I kill off the python web application after I have used it to restart any user's processes, those processes will take over the port used by the python web application in a round-robin fashion, so I am unable to restart the python web application because the port is already bound. As a result, I must kill off the processes involved in the restart until nothing occupies the port the python web application uses.
Everything is ok except for those processes occupying the port. That is really undesirable.
Processes that may be restarted:
redis-server
newrelic-admin run-program (which spawns another web application)
a python worker process
UPDATE (6 June 2013): I have managed to solve this problem. Look at my answer below.
Original Post:
I have a python web application running on a port. This python program has a function that calls a bash script. The bash script spawns a few background processes, then exits.
The problem is, whenever I kill the python program, the background processes spawned by the bash script will take over and occupy that same port.
Specifically the subprocesses are:
a redis server (with daemonize = true in the configuration file)
newrelic-admin run-program (spawns a web application)
a python worker process
Update 2: I've tried running these with nohup. Only the python worker process doesn't attempt to take over the port after I kill the python web application. The redis server and newrelic-admin still do.
I observed this problem when I was using subprocess.call in the python program to run the bash script. I've tried a double fork method in the python program before running the bash script, but it results in the same problem.
How can I prevent any processes spawned from the bash script from taking over the port?
Thank you.
Update: My intention is that those processes spawned by the bash script should continue running if the python application is killed off. Currently, they do continue running after I kill off the python application. The problem is that when I kill off the python application, the processes spawned by the bash script start to take over the port in a round-robin fashion.
Update 3: Based on the output I see from 'pstree' and 'ps -axf', processes 1 and 2 (the redis server and the web app spawned by newrelic-admin run-program) are not child processes of the python web application. This makes it even weirder that they take over the port that the python web application occupies when I kill it... Does anyone know why?
Just some background on the methods I've tried to solve my above problem, before I go on to the answer proper:
subprocess.call
subprocess.Popen
execve
the double fork method along with one of the above (http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/)
By the way, none of the above worked for me. Whenever I killed off the web application that executes the bash script (which in turn spawns some background processes I'll denote as Q), the processes in Q would, in round-robin fashion, take over the port occupied by the web application, so I had to kill them one by one before I could restart my web application.
After many days of living with this problem and moving on to other parts of my project, I remembered some random Stack Overflow posts and other articles on the Internet, and recalled from my own experience ssh'ing into a remote server, starting a detached screen session, logging out, and logging in again some time later to find the screen session still alive.
So I thought, hey, what the heck, nothing works so far, so I might as well try using screen to see if it can solve my problem. And to my great surprise and joy it does! So I am posting this solution hopefully to help those who are facing the same issue.
In the bash script, I simply started the processes in named, detached screen sessions. For instance, for the redis application, I might start it like this:
screen -dmS redisScreenName redis-server redis.conf
So those processes keep running in the detached screen sessions they were started in. In this case, I did not daemonize the redis process.
To kill the screen process, I used:
screen -S redisScreenName -X quit
However, this does not kill the redis-server. So I had to kill it separately.
Now, in the python web application, I can just use subprocess.call to execute the bash script, which will spawn detached screen sessions (using 'screen -dmS') which run the processes I want to spawn. And when I kill off the python web application, none of the spawned processes take over its port. Everything works smoothly.
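For illustration, the calls from the web application can stay as simple as this; whether you invoke screen directly from Python or keep the bash script in between, the idea is the same (the session and config names are just the ones used above):
import subprocess

# Start redis in a detached, named screen session; the call returns immediately.
subprocess.call(['screen', '-dmS', 'redisScreenName',
                 'redis-server', 'redis.conf'])

# Later, to tear the screen session down. Note this kills screen, not
# redis-server, which still has to be stopped separately as noted above.
subprocess.call(['screen', '-S', 'redisScreenName', '-X', 'quit'])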

Starting and stopping server script

I've been building a performance test suite to exercise a server. Right now I run this by hand but I want to automate it. On the target server I run a python script that logs server metrics and halts when I hit enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps and naming logs and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my python script via ssh, but how to halt it when a test is done? If I can do it in ssh then I don't need to mess with existing configuration and that is a big win. The python script is quite simple and I don't mind rewriting it if that helps.
The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP might be a popular choice. Docs here.
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
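For example, here is a minimal sketch of the Python side, assuming SIGHUP as the shutdown signal; the pid file path is just an example and needs to be writable by the script:
import os
import signal
import time

PIDFILE = '/var/run/metrics-logger.pid'  # example path; pick one you can write to

shutting_down = False

def handle_sighup(signum, frame):
    # Flip a flag instead of exiting immediately, so the loop can clean up.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGHUP, handle_sighup)

with open(PIDFILE, 'w') as f:
    f.write(str(os.getpid()))  # lets the remote kill command find this process

while not shutting_down:
    # ... existing metric-logging code goes here ...
    time.sleep(1)

# flush/close log files here before exiting
The bash script on the tester machine can then end a run over ssh with something like kill -HUP $(cat /var/run/metrics-logger.pid).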
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
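If you do go the fifo route, here is a rough sketch of the idea (the fifo path and the quit command are just examples):
import os

FIFO = '/tmp/perftest-control'  # example location for the named pipe

if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

running = True
while running:
    # Opening the fifo for reading blocks until something writes to it.
    with open(FIFO) as pipe:
        for line in pipe:
            if line.strip() == 'quit':
                running = False  # stop logging once the writer closes the pipe
            # other commands ('mark', 'rotate', ...) could be handled here
The bash script can then stop the logger over ssh with something like echo quit > /tmp/perftest-control.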

How do I keep a python HTTP Server up forever?

I wrote a simple HTTP server in python to manage a database hosted on a server via a web UI. It is perfectly functional and works as intended. However, it has one huge problem: it won't stay up. It works for an hour or so, but if it is left unused for a long period, I have to re-initialize it when I come back to use it. Right now the method I use to make it serve is:
def main():
    global db
    db = DB("localhost")
    server = HTTPServer(('', 8080), MyHandler)
    print 'started httpserver...'
    server.serve_forever()

if __name__ == '__main__':
    main()
I run this in the background on a Linux server with a command like sudo python webserver.py & to detach it, but as I mentioned, after a while it quits. Any advice is appreciated because, as it stands, I don't see why it shuts down.
You can write a UNIX daemon in Python using the python-daemon package, or a Windows service using pywin32.
Unfortunately, I know of no "portable" solution to writing daemon / service processes (in Python, or otherwise).
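With python-daemon the change to the script above is small. A minimal sketch, assuming the package is installed (pip install python-daemon):
import daemon

def main():
    # existing setup from the question goes here, e.g. DB("localhost") and
    # HTTPServer(('', 8080), MyHandler).serve_forever()
    pass

if __name__ == '__main__':
    # DaemonContext forks into the background and detaches from the terminal,
    # so the server keeps running after you log out.
    with daemon.DaemonContext():
        main()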
Here's one piece of advice, phrased as a driving analogy. You certainly want to drive safely (figure out why your program is failing and fix it). In the (rare?) case of a crash, some monitoring infrastructure, like monit, can be helpful to restart crashed processes. But you probably wouldn't want to use it to paper over a crash, just like you wouldn't want to deploy your air bag every time you stopped the car.
Well, the first step is to figure out why it's crashing. There are two likely possibilities:
The serve_forever call is throwing an exception.
The python process is crashing/being terminated.
In the former case, you can make it live forever by wrapping the call in a loop with a try/except. It's probably a good idea to log the error details.
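In the question's terms, that loop could look roughly like this (the logging setup is just an example; DB and MyHandler come from the original script):
import logging
import traceback

logging.basicConfig(filename='webserver.log', level=logging.INFO)

def main():
    global db
    db = DB("localhost")
    server = HTTPServer(('', 8080), MyHandler)
    logging.info('started httpserver...')
    while True:
        try:
            server.serve_forever()
        except Exception:
            # Log the full traceback, then go straight back to serving.
            logging.error('serve_forever raised:\n%s', traceback.format_exc())

if __name__ == '__main__':
    main()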
The latter case is a bit trickier, because it could be caused by a variety of things. Does it happen if you run the script in the foreground? If not, maybe there's some kind of maintenance service running that is terminating your script?
Not really a complete answer, but perhaps enough to help you diagnose the problem.
Have you tried running it from inside a screen session?
$ screen -L sudo python webserver.py
As an alternative to screen there is nohup, which will ensure the process carries on running after you've logged out.
It's also worth checking the logs to see why it's being killed or quitting, as the cause may not be the operating system but an internal fault.
