Remotely start Python program in background

I need to use a fabfile to remotely start a program on remote boxes from time to time and collect the results. Since the program takes a long while to finish, I want it to run in the background so I don't have to wait. I first tried os.fork(): when I ssh to the remote box and start the program there, os.fork() works and the program runs in the background fine, but when I start the program remotely through fabfile's run or sudo, os.fork() doesn't work and the program just dies silently. So I switched to python-daemon to daemonize the program, and for a good while it worked perfectly. But now that I've made my program read some Python shelve dicts, python-daemon no longer works: the shelve dicts aren't loaded correctly, and I don't know why. Besides os.fork() and python-daemon, what else can I try to solve my problem?

If I understand your question right, I think you're making this far too complicated. os.fork() is for multiprocessing, not for running a program in the background.
Let's say for the sake of discussion that you wanted to run program.sh and collect what it sends to standard output. To do this with fabric, create locally:
fabfile.py:
from fabric.api import run
def runmyprogram():
    run('./program.sh > output 2> /dev/null < /dev/null &')
Then, locally, run:
fab -H remotebox runmyprogram
The program will execute remotely, but fabric will not wait for it to finish. You'll need to harvest the output files later, perhaps using scp. The "&" makes this run in the background on the remote machine, and output redirection is necessary to avoid a hung fabric session.
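For the harvesting step you can also stay inside Fabric rather than reach for scp: Fabric 1.x ships a get task for pulling files back. A minimal sketch, assuming the output file is named output in the remote home directory:
from fabric.api import get

def collectoutput():
    # Download the remote "output" file into a per-host local copy,
    # e.g. output-remotebox, so runs on several hosts don't collide.
    get('output', 'output-%(host)s')
Run it with fab -H remotebox collectoutput once the job has finished.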
If you don't need to use fabric, there are easier ways of doing this. You can ssh individually and run
nohup ./program.sh > output &
then come back later to check output.
If this is something that you'll do on a regular basis, this might be the better option, since you can just set up a cron job to run every so often, and then collect the output whenever you want.
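If you go the cron route, the crontab entry can be as simple as the following sketch (the schedule and paths are placeholders; adjust to taste):
0 2 * * * cd $HOME && ./program.sh > output 2>&1
That runs program.sh nightly at 2am and leaves its output in the same place for you to collect whenever you want.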
If you'd rather not harvest the output files later, you can use:
fabfile.py:
from fabric.api import run
def runmyprogram():
    run('./program.sh')
Then, on your local machine:
fab -H remotebox runmyprogram > output &
The jobs will run remotely, and put all their output back into the local output file. This runs in the background on your local machine, so you can do other things. However, if the connection between your local and remote machines might be interrupted, it's better to use the first approach so the output is always safely stored on the remote machines.

For those who come across this post in the future: python-daemon can still work. Just be sure to load the shelve dicts within the same process. Previously the shelve dicts were loaded in the parent process; when python-daemon spawned the child process, the dict handles were not carried over correctly. Once we fixed that, everything worked again.
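For reference, a minimal sketch of the working arrangement, assuming python-daemon's DaemonContext and a hypothetical shelve file data.db:
import daemon
import shelve

with daemon.DaemonContext():
    # Open the shelve here, inside the daemonized child process.
    # DaemonContext closes the parent's open files when it detaches,
    # so a shelve opened before this point would be left broken.
    db = shelve.open('data.db')
    try:
        do_work(db)   # hypothetical stand-in for the long-running job
    finally:
        db.close()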
Thanks to everyone who offered valuable comments on this thread!

Related

What is the recommended way of running a web application?

I'm writing a Python webserver (e.g. using Flask or Bottle, etc.), and I would like to start it, close the SSH terminal, and leave it running.
Which would be the recommended pythonic way to do it?
Create a daemon, and use python myapp.py start, python myapp.py stop. However, the python-daemon module has nearly no documentation and doesn't support triggering an action just before exiting (I added a few lines to it to support that), so it's a bit of a hack built on an undocumented, not-really-maintained module; even if it works, I'm not 100% happy with it.
Use nohup python myapp.py &, but then the drawback is that you have to ps aux | grep py, find the relevant PID, and kill 12345 to stop it. Here again, it doesn't let you perform actions before stopping (e.g. save the database to disk); a signal handler would cover that part, see the sketch after the note below. So it's not a very nice solution either.
screen -dmS myapp python myapp.py to start; then you can log out of the SSH terminal. Later you can reattach with screen -r myapp, and CTRL+C can stop it (provided KeyboardInterrupt is handled well). That's what I would use currently. But I'm not sure whether using screen to keep a server running forever is a good idea (what happens if the logging is really verbose? and is there a risk that the extra screen layer makes things bloated?).
Another, cleaner solution? I hope there's a cleaner option than 1, 2, and 3, which all have drawbacks.
Note: I would like to avoid installing new managers (upstart or supervisor), and do it with the least number of tools possible, to avoid new layers of complexity.
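For what it's worth, the "perform actions before stopping" part of options 1 and 2 can be covered with a plain signal handler; a minimal sketch, where save_database() is a hypothetical cleanup hook:
import signal
import sys

def shutdown(signum, frame):
    save_database()   # hypothetical: persist state before exiting
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)   # a plain "kill <pid>" triggers this
With that in place, the nohup approach's kill 12345 becomes a clean shutdown rather than an abrupt one.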

Killing a subprocess started via sudo

In a project I am working on, there is some code that starts up a long-running process using sudo:
subprocess.Popen(['sudo', '/usr/bin/somecommand', ...])
I would like to clean up this process when the parent exits. Currently, the subprocess keeps running when the parent exits (reparented to init, of course).
I am not sure of the best solution to this problem. The code is limited to only running certain commands via sudo, and granting blanket authority to run sudo kill would be sketchy at best.
I don't have an open pipe to the child process that I can close (the child process is not reading from stdin), and I am not able to modify the code of the child process.
Are there any other mechanisms that might work in this situation?
First of all, I'll just answer the question; though I don't think it's a good design, it's what you asked for. I would wrap the child process in a small program that listens on stdin. You then run that wrapper under sudo: it starts the command without needing sudo itself, knows the child's pid, and has the rights needed to kill the process when you ask it to via stdin.
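A minimal sketch of such a wrapper (names hypothetical; the wrapper itself is the thing started via sudo):
#!/usr/bin/env python
# runner.py: started via sudo, so it already has the rights it needs.
import subprocess
import sys

child = subprocess.Popen(['/usr/bin/somecommand'])
try:
    for line in sys.stdin:
        if line.strip() == 'stop':
            break
finally:
    # Reached on "stop" or when the parent exits and stdin hits EOF,
    # so the command never outlives its controller.
    child.terminate()
    child.wait()
The parent holds a pipe to the wrapper's stdin and writes "stop" when it wants the command gone; because EOF also falls through to the cleanup, the child is killed even if the parent simply dies.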
However, such a situation generally means passwordless sudo and poor security. The more common technique is to lower your program's privileges, not elevate them. In that case you create a runner program that is started by the superuser; it then starts your main program with lowered privileges and listens on a pipe for communication. When a command needs to be run, your main program tells the runner, and the runner does the job. When the command needs to be terminated, you again tell the runner via the pipe.
The common rules are:
If you need superuser rights, you should give them to the very parent process.
If a child process needs to do a privileged operation, it requests the top-level process to do that for it.
The top-level process should be kept as small as possible and do as little as possible. The larger it is, the more holes in security it creates.
That's what many applications do. The first example that comes to mind is the Apache web server (at least on *nix), which has a small top-level program and pre-forked worker processes that do not run as root/wheel/whatever-else-is-the-superuser-username.
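The privilege-lowering step itself is small; a sketch of what the runner would do after forking, before starting the main program (username is whatever unprivileged account you choose):
import os
import pwd

def drop_privileges(username):
    # Look up the target account and switch to it.
    pw = pwd.getpwnam(username)
    os.setgid(pw.pw_gid)   # group first: after setuid it would be forbidden
    os.setuid(pw.pw_uid)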
This will raise OSError: [Errno 1] Operation not permitted on the last line, because the child runs as root and the unprivileged parent is not allowed to signal it:
p = subprocess.Popen(['sudo', '/usr/bin/somecommand', ...], stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()
Assuming sudo will not ask for a password, one workaround is to make a shell script which calls sudo …
#!/bin/sh
sudo /usr/bin/somecommand
… and then do this in Python:
p = subprocess.Popen("/path/to/script.sh", cwd="/path/to")
print p.stdout.read()
p.terminate()

Starting and stopping server script

I've been building a performance test suite to exercise a server. Right now I run this by hand but I want to automate it. On the target server I run a python script that logs server metrics and halts when I hit enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps and naming logs and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my Python script via ssh, but how do I halt it when a test is done? If I can do it all over ssh then I don't need to mess with the existing configuration, and that is a big win. The Python script is quite simple and I don't mind rewriting it if that helps.
The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP might be a popular choice; see the signal module documentation.
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
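Putting those two pieces together, a minimal sketch of the metrics script's side of things (the pidfile path is a placeholder):
import os
import signal
import sys

PIDFILE = '/var/run/metrics.pid'   # placeholder location

def stop(signum, frame):
    # Flush and close the metrics log here, then exit cleanly.
    os.remove(PIDFILE)
    sys.exit(0)

with open(PIDFILE, 'w') as f:
    f.write(str(os.getpid()))
signal.signal(signal.SIGHUP, stop)
signal.signal(signal.SIGTERM, stop)
The bash driver can then end a test with ssh target 'kill -HUP $(cat /var/run/metrics.pid)'.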
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
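A sketch of the fifo variant, with the pipe at a made-up location:
import os

FIFO = '/tmp/metrics.ctl'   # made-up control-pipe path
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

with open(FIFO) as ctl:     # blocks until a writer opens the other end
    for line in ctl:
        if line.strip() == 'quit':
            break           # fall through to the script's cleanup code
The tester then stops the run with ssh target 'echo quit > /tmp/metrics.ctl'.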

How do I keep a python HTTP Server up forever?

I wrote a simple HTTP server in Python to manage a database hosted on a server via a web UI. It is perfectly functional and works as intended. However, it has one huge problem: it won't stay up. It will work for an hour or so, but if left unused for a long period, I have to re-initialize it when I return. Right now the method I use to make it serve is:
def main():
    global db
    db = DB("localhost")
    server = HTTPServer(('', 8080), MyHandler)
    print 'started httpserver...'
    server.serve_forever()

if __name__ == '__main__':
    main()
I run this in the background on a Linux server with a command like sudo python webserver.py & to detach it, but as I mentioned previously, after a while it quits. Any advice is appreciated, because as it stands I don't see why it shuts down.
You can write a UNIX daemon in Python using the python-daemon package, or a Windows service using pywin32.
Unfortunately, I know of no "portable" solution to writing daemon / service processes (in Python, or otherwise).
Here's one piece of advice, told as a story about driving: you certainly want to drive safely (figure out why your program is failing and fix it). In the (rare?) case of a crash, some monitoring infrastructure, like monit, can be helpful to restart crashed processes. But you wouldn't want to use it to paper over a crash, just as you wouldn't want to deploy your airbag every time you stopped the car.
Well, the first step is to figure out why it's crashing. There are two likely possibilities:
The serve_forever call is throwing an exception.
The python process is crashing/being terminated.
In the former case, you can make it live forever by wrapping it in a loop with a try-except. It's probably a good idea to log the error details.
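A minimal sketch of that wrapper, reusing the server object from the question:
import logging
logging.basicConfig()   # send records to stderr; point at a file if you prefer

while True:
    try:
        server.serve_forever()
    except Exception:
        # Log the full traceback and go around again.
        # KeyboardInterrupt is not an Exception, so Ctrl-C still stops it.
        logging.exception('serve_forever() raised; restarting')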
The latter case is a bit trickier, because it could be caused by a variety of things. Does it happen if you run the script in the foreground? If not, maybe there's some kind of maintenance service running that is terminating your script?
Not really a complete answer, but perhaps enough to help you diagnose the problem.
Have you tried running it from inside a screen session?
$ screen -L sudo python webserver.py
As an alternative to screen there is nohup, which will ensure the process carries on running after you've logged out.
It's also worth checking the logs to see why it's being killed or quitting, as the cause may not be the operating system but an internal fault.

running a python script on a remote computer

I have a Python script and am wondering whether there is any way to ensure that it runs continuously on a remote computer. For example, if the script crashes for whatever reason, is there a way to start it up automatically instead of having to remote-desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
Many ways. In the case of Windows, even a simple looping batch file would probably do: just have it start the script in a loop, so that whenever the script crashes it returns to the shell and is restarted.
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you're asking by using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
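A minimal sketch of that arrangement, using the Python 2 module name to match the style elsewhere on this page (the port is arbitrary):
from SimpleXMLRPCServer import SimpleXMLRPCServer

def ping():
    return 'alive'   # trivial health check the remote side can call

server = SimpleXMLRPCServer(('0.0.0.0', 9000))
server.register_function(ping)
server.serve_forever()
A client then reaches it with xmlrpclib.ServerProxy('http://host:9000').ping() (xmlrpc.client on Python 3).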
It depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process from within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate Python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX-only; you can clone a subset of its functionality, though.
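A sketch of that supervisor idea, where worker.py stands in for your actual script:
import subprocess
import time

while True:
    # Run the script and wait until it exits, however that happens,
    # then restart it after a short pause so a crash loop can't spin.
    subprocess.call(['python', 'worker.py'])
    time.sleep(5)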
