What is the recommended way of running a web application? - python

I'm writing a Python web server (e.g. using Flask or Bottle), and I would like to start it, close the SSH terminal, and leave it running.
What would be the recommended, Pythonic way to do that?
1) Create a daemon, and use python myapp.py start / python myapp.py stop. However, the python-daemon module has almost no documentation and doesn't support triggering an action just before exiting (I added a few lines to it to support that), so it ends up being a hack on top of an undocumented, not-really-maintained module. Even though it works, I'm not 100% happy with it.
2) Use nohup python myapp.py &, but then the drawback is that you have to ps aux | grep py, find the relevant PID, and kill 12345 to stop it. Here again, it doesn't allow running actions before stopping (e.g. saving the database to disk), so it's not a very nice solution either (a minimal workaround is sketched below).
3) screen -dmS myapp python myapp.py to start, so you can log out of the SSH terminal. Later you can reattach with screen -r myapp, and CTRL+C can stop it (provided KeyboardInterrupt is handled properly). That's what I would use currently. But I'm not sure that using screen to keep a server running forever is a good idea (what happens if the logging is really verbose? is there a risk that the extra screen layer adds overhead?).
4) Another, cleaner solution? I hope there is something cleaner than 1, 2 and 3, which all have drawbacks.
Note: I would like to avoid installing new process managers (upstart or supervisor), and do it with the fewest tools possible, to avoid new layers of complexity.
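As for the "actions before stopping" drawback in option 2, it can be mitigated by handling SIGTERM in the script itself. A minimal sketch, where save_database is just a placeholder for whatever shutdown work the app needs:

import signal
import sys

def save_database():
    # placeholder: flush whatever state the app needs to persist
    pass

def handle_sigterm(signum, frame):
    save_database()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

# ...then start the Flask/Bottle app as usual, e.g. app.run(port=8080)

With that in place, nohup python myapp.py & plus a plain kill <pid> still gives the app a chance to save its state before exiting.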

Related

How to convert a wxPython app to run as a headless unix job?

I am currently running wx 2.8.9.1 on a Linux box.
The app I am working on was originally written to run on MS Windows. We are planning to port part of the core logic to Linux and run it as a process.
The problem is that the Linux box is headless; we will not have an X Window environment. But the existing codebase is written in such a way that it is tightly coupled to the wx layer.
For example, I have a couple of classes that are subclasses of wx.EvtHandler.
I could probably rewrite them one by one, but that is really not ideal.
In the new wx Phoenix there is an AppConsole class, which seems to be able to start an event loop without X. However, it is not available in my local version of wx.
The goal is ultimately to run the code in a cron job.
I am basically looking for some advice/pointers on how to tackle this issue. It would be nice to avoid as much rewriting as possible.
One way is to use your local display: ssh into your server with the -X option to redirect the display to your workstation
ssh -X server
Then start the application on the server; it will use your workstation's display automatically.
On Windows, an X server such as Xming, or the one bundled with Cygwin, can play the same role.
As an alternative, you can use Xvfb, which provides a virtual (in-memory) X server, so no physical display is needed. You can then start your application using xvfb-run as
xvfb-run my_wx_application
It turns out the rewriting is not too bad.
There are only three dependencies on wx objects in my code
1) subclassing wx.EvtHandler
2) wx.CallLater
3) wx.CallAfter
So the first case requires a reimplementation.
The other two can be replaced with threading.Timer easily.
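For reference, the kind of drop-in replacement meant here might look like the following sketch (the callback function and its argument are illustrative, not from the original code):

import threading

def callback(msg):
    print(msg)

# wx.CallLater(5000, callback, "later") roughly becomes (note: seconds, not milliseconds):
threading.Timer(5.0, callback, args=("later",)).start()

# wx.CallAfter(callback, "soon") can be approximated with a zero-delay timer; the
# "run on the main/GUI thread" guarantee is lost, but that no longer matters headless:
threading.Timer(0, callback, args=("soon",)).start()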

Starting and stopping server script

I've been building a performance test suite to exercise a server. Right now I run this by hand but I want to automate it. On the target server I run a python script that logs server metrics and halts when I hit enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps and naming logs and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my Python script via ssh, but how do I halt it when a test is done? If I can do it all over ssh then I don't need to mess with the existing configuration, and that is a big win. The Python script is quite simple and I don't mind rewriting it if that helps.
The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP might be a popular choice; see the Python signal module documentation for the details.
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
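A sketch of the pid-file idea, with an illustrative path (as noted above, /var/run usually requires root, so a user-writable location may be more practical for a test script):

import os

PIDFILE = '/tmp/perf_logger.pid'   # illustrative path; /var/run usually needs root

with open(PIDFILE, 'w') as f:
    f.write(str(os.getpid()))

# ... install the signal handler and run the metrics-logging loop ...

The bash driver can then stop the script over ssh with something like ssh target 'kill -HUP $(cat /tmp/perf_logger.pid)'.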
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
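A sketch of the fifo approach, again with an illustrative path and a single "quit" command:

import os

FIFO = '/tmp/perf_logger_ctl'   # illustrative path for the named pipe

if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

# open() blocks until a writer (e.g. an ssh'd "echo quit > ...") connects
with open(FIFO) as pipe:
    for line in pipe:
        if line.strip() == 'quit':
            # stop logging, flush metrics to disk, etc.
            break

The bash script would then end a test run with something like ssh target 'echo quit > /tmp/perf_logger_ctl'.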

How do I keep a python HTTP Server up forever?

I wrote a simple HTTP server in Python to manage a database hosted on a server via a web UI. It is perfectly functional and works as intended. However, it has one huge problem: it won't stay up. It works for an hour or so, but if it's left unused for a long period, I have to re-initialize it when I come back. Right now the way I start it is:
from BaseHTTPServer import HTTPServer  # Python 2 standard library

def main():
    global db
    db = DB("localhost")                       # DB and MyHandler are defined elsewhere in my code
    server = HTTPServer(('', 8080), MyHandler)
    print 'started httpserver...'
    server.serve_forever()

if __name__ == '__main__':
    main()
I run this in the background on a Linux server, so I start it with something like sudo python webserver.py & to detach it, but as I mentioned, it quits after a while. Any advice is appreciated because, as it stands, I don't see why it shuts down.
You can write a UNIX daemon in Python using the python-daemon package, or a Windows service using pywin32.
Unfortunately, I know of no "portable" solution to writing daemon / service processes (in Python, or otherwise).
Here's one piece of advice, by way of a driving analogy. You certainly want to drive safely (figure out why your program is failing and fix it). In the (rare?) case of a crash, some monitoring infrastructure, like monit, can be helpful to restart crashed processes. But you wouldn't want to use it to paper over a crash, just as you wouldn't want to deploy your air bag every time you stopped the car.
Well, the first step is to figure out why it's crashing. There are two likely possibilities:
The serve_forever call is throwing an exception.
The python process is crashing/being terminated.
In the former case, you can keep it alive by wrapping the call in a loop with a try/except. It's probably a good idea to log the error details as well.
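For example, a sketch of that wrapping, reusing the DB and MyHandler names from the question (Python 2, like the original):

import logging
import time
from BaseHTTPServer import HTTPServer

logging.basicConfig(filename='webserver.log', level=logging.INFO)

def main():
    global db
    db = DB("localhost")                       # DB and MyHandler come from the question's code
    server = HTTPServer(('', 8080), MyHandler)
    while True:
        try:
            server.serve_forever()
        except Exception:
            logging.exception('serve_forever() raised; restarting in 5 seconds')
            time.sleep(5)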
The latter case is a bit trickier, because it could be caused by a variety of things. Does it happen if you run the script in the foreground? If not, maybe there's some kind of maintenance service running that is terminating your script?
Not really a complete answer, but perhaps enough to help you diagnose the problem.
Have you tried running it from inside a screen session?
$ screen -L sudo python webserver.py
As an alternative to screen there is nohup, which will ensure the process keeps running after you have logged out.
It's also worth checking the logs to see why it is being killed or quitting, as the cause may not be the operating system but an internal fault.

remotely start Python program in background

I need to use a fabfile to remotely start a program on remote boxes from time to time and get the results. Since the program takes a long while to finish, I would like to make it run in the background so I don't need to wait. I tried os.fork() to make it work. The problem is that when I ssh to the remote box and run the program with os.fork() there, it works in the background fine; but when I try to use fabric's run or sudo to start the program remotely, os.fork() doesn't work and the program just dies silently. So I switched to python-daemon to daemonize the program. For a long while it worked perfectly. But now that I've made the program read some Python shelve dicts, python-daemon no longer works. It seems that if you use python-daemon, the shelve dicts cannot be loaded correctly, and I don't know why. Besides os.fork() and python-daemon, does anyone have an idea what else I can try to solve my problem?
If I understand your question right, I think you're making this far too complicated. os.fork() is for multiprocessing, not for running a program in the background.
Let's say for the sake of discussion that you wanted to run program.sh and collect what it sends to standard output. To do this with fabric, create locally:
fabfile.py:
from fabric.api import run
def runmyprogram():
    run('./program.sh > output 2> /dev/null < /dev/null &')
Then, locally, run:
fab -H remotebox runmyprogram
The program will execute remotely, but fabric will not wait for it to finish. You'll need to harvest the output files later, perhaps using scp. The "&" makes this run in the background on the remote machine, and output redirection is necessary to avoid a hung fabric session.
If you don't need to use fabric, there are easier ways of doing this. You can ssh individually and run
nohup ./program.sh > output &
then come back later to check output.
If this is something that you'll do on a regular basis, this might be the better option, since you can just set up a cron job to run every so often, and then collect the output whenever you want.
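For instance, a crontab entry along these lines (paths and schedule are illustrative) would start the job every night at 2am and capture its output:
0 2 * * * /home/user/program.sh > /home/user/output 2>&1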
If you'd rather not harvest the output files later, you can use:
fabfile.py:
from fabric.api import run
def runmyprogram():
    run('./program.sh')
Then, on your local machine:
fab -H remotebox runmyprogram > output &
The jobs will run remotely, and put all their output back into the local output file. This runs in the background on your local machine, so you can do other things. However, if the connection between your local and remote machines might be interrupted, it's better to use the first approach so the output is always safely stored on the remote machines.
For those who come across this post in the future: python-daemon can still work. Just be sure to open the shelve dicts within the daemonized process. Previously the shelve dicts were loaded in the parent process, so when python-daemon spawned the child process the dict handles were not carried over correctly. Once we fixed this, everything worked again.
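In code, the fix amounts to something like this sketch (using python-daemon's DaemonContext; the shelve path is illustrative):

import daemon
import shelve

def run():
    # open the shelve inside the daemonized child process, not before forking
    db = shelve.open('/tmp/mydata.shelve')
    try:
        # ... long-running work that reads/writes db ...
        pass
    finally:
        db.close()

with daemon.DaemonContext():
    run()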
Thanks for those suggesting valuable comments on this thread!

running a python script on a remote computer

I have a Python script and am wondering: is there any way to ensure that the script runs continuously on a remote computer? For example, if the script crashes for whatever reason, is there a way to start it up again automatically instead of having to remote desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
There are many ways. On Windows, even a simple looping batch file would probably do: have it start the script in a loop, so whenever the script crashes, control returns to the batch file and the script is restarted.
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you ask using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
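A minimal sketch of that idea with the standard-library modules (Python 3 module names; the host, port and ping function are illustrative):

# on the remote machine
from xmlrpc.server import SimpleXMLRPCServer

def ping():
    return 'alive'

server = SimpleXMLRPCServer(('0.0.0.0', 8000), allow_none=True)
server.register_function(ping)
server.serve_forever()

And on the controlling machine:

import xmlrpc.client

proxy = xmlrpc.client.ServerProxy('http://remotehost:8000/')
print(proxy.ping())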
It depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart the process from within itself. If it's more than that, one possibility is to run it as a process spawned from a separate Python script that acts as a supervisor. I'd recommend supervisord, but that's UNIX-only; you can clone a subset of its functionality yourself, though.
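A rough, cross-platform sketch of such a supervisor, restarting the watched script whenever it exits abnormally (worker.py and the retry delay are illustrative):

import subprocess
import sys
import time

while True:
    # run the watched script with the same interpreter as the supervisor
    exit_code = subprocess.call([sys.executable, 'worker.py'])
    if exit_code == 0:
        break              # clean exit: stop supervising
    time.sleep(5)          # crashed: pause briefly, then restart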
