I have my Python app running through uWSGI. Rarely, the app encounters an error that prevents it from loading. At that point, if I send requests to uWSGI, I get the error "no python application found, check your startup logs for errors". What I would like to happen in this situation is for uWSGI to just die, so that the program managing it (Supervisor, in my case) can restart it. Is there a setting or something I can use to force this?
More info about my setup:
Python 2.7 app being run through uWSGI in a docker container. The docker container is managed by Supervisor, and if it dies, Supervisor will restart it, which is what I want to happen.
After an hour of searching, I finally found a way to do this. Just pass the --need-app argument when starting uWSGI, or add need-app = true in your .ini file, if you run things that way. No idea why this is off by default (in what situation would you ever want uWSGI to keep running when your app has died?) but so it goes.
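For reference, a minimal uwsgi.ini sketch with the option enabled (the module, socket and process values below are placeholders, not part of the original answer):

[uwsgi]
# placeholders: point these at your own app and socket
module = myapp:app
socket = 0.0.0.0:8000
master = true
processes = 4
# exit instead of serving "no python application found" when the app fails to load
need-app = true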
Related
I have a Flask app that I run with uWSGI. I have configured logging to a file in the Python/Flask application, so on service start it logs that the application has been started.
I want to be able to do this when the service stops as well, but I don't know how to implement it.
For example, if I run the uWSGI app in the console and then interrupt it with Ctrl-C, I only get uWSGI's own logs ("Goodbye to uwsgi" etc.) in the console, but no logs from the stopped Python application. I'm not sure how to do this.
I would be glad if someone advised on possible solutions.
Edit:
I've tried to use Python's atexit module, but the function that I registered to run on exit is executed not one time, but 4 times (which is the number of uWSGI workers).
There is no "stop" event in WSGI, so there is no way to detect when the application stops, only when the server / worker stops.
I have deployed a REST service inside a Docker container using uWSGI and nginx.
When I run this Python Flask REST service inside the Docker container, the service works fine for about the first hour, but after some time nginx and the REST service stop for some reason.
Has anyone faced a similar issue?
Is there any known fix for this issue?
Consider doing a docker ps -a to get the stopped container's identifier.
-a here just means listing all of the containers on your machine, not just the running ones.
Then do docker inspect and look for the LogPath attribute.
Open up the container's log file and see if you could identify the root cause on why the process died inside the container. (You might need root permission to do this)
Note: a process can die for any number of reasons, e.g. a fault in the code.
If nothing suspicious shows up in the log file, then you might want to check the State attribute. Also check the ExitCode attribute and try to work backwards to figure out where your application could have exited with that code.
Also check the OOMKilled flag; if it is true, your container was killed because of an out-of-memory error.
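As a rough sketch, the inspection steps above look something like this (web-api is a placeholder container name, not from the original question):

# list all containers, including stopped ones
docker ps -a
# full JSON dump: look for LogPath, State, ExitCode and OOMKilled
docker inspect web-api
# or pull out individual fields
docker inspect --format '{{.State.ExitCode}}' web-api
docker inspect --format '{{.State.OOMKilled}}' web-api
# read the container's log file (usually needs root)
sudo less "$(docker inspect --format '{{.LogPath}}' web-api)"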
If you still can't figure out why, then you might need to add more logging to your application to get more insight into why it died.
I want to know how to restart my gunicorn server automatically after my Django project's code has changed. Currently I restart it manually after I make changes: I kill the process and reload it. But that is not a good way to do it, so I want to know how to do the same automatically when the code changes. I am using nginx too.
Take a look at the reload setting:
https://docs.gunicorn.org/en/stable/settings.html#debugging
reload
--reload
Default: False

Restart workers when code changes.

This setting is intended for development. It will cause workers to be restarted whenever application code changes.
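For example, assuming the Django project's WSGI module is myproject.wsgi (a placeholder name), a development invocation would look roughly like this:

gunicorn --reload --bind 127.0.0.1:8000 myproject.wsgi

If you start gunicorn with a config file instead (-c gunicorn.conf.py), the equivalent setting is reload = True in that file. Again, this is meant for development; in production you would redeploy and restart the service instead.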
I have written a module in Python and want it to run continuously once started, and I need to be able to stop it when I have to update other modules. I will likely be using monit to restart it if the module has crashed or is otherwise not running.
I was looking at different techniques like daemonizing, Upstart and many others.
Which is the best approach, so that I can use it across all my new modules to keep them running forever?
From your mention of Upstart I will assume that this question is for a service being run on an Ubuntu server.
On an Ubuntu server an upstart job is really the simplest and most convenient option for creating an always on service that starts up at the right time and can be stopped or reloaded with familiar commands.
To create an upstart service you need to add a single file, called <service-name>.conf, to /etc/init. An example script looks like this:
description "My chat server"
author "your#email-address.com"
start on runlevel [2345]
stop on runlevel [!2345]
env AN_ENVIRONMENTAL_VARIABLE=i-want-to-set
respawn
exec /srv/applications/chat.py
This means that every time the machine is started it will start the chat.py program. If it dies for whatever reason, upstart will restart it. You don't have to worry about double-forking or otherwise daemonizing your code. That's handled for you by upstart.
If you want to stop or start your process, you can do so with
service chat start
service chat stop
The name chat is taken automatically from the name of the .conf file inside /etc/init.
I'm only covering the basics of upstart here. There are lots of other features to make it even more useful. All available by running man upstart.
This method is much more convenient than writing your own daemonization code. A 4-8 line config file for a built-in Ubuntu component is much less error-prone than making your code safely double-fork and then having another process monitor it to make sure it doesn't go away.
Monit is a bit of a red herring. If you want downtime alerts you will need to run a monitoring program on a separate server anyway. Rely on upstart to keep the process always running on a server. Then have a different service that makes sure the server is actually running. Downtime happens for many different reasons. A process running on the same server will tell you precisely nothing if the server itself goes down. You need a separate machine (or a third party provider like pingdom) to alert you about that condition.
You could check out supervisor. It can start a process at system startup and then keep it alive until shutdown.
The simplest configuration file would be:
[program:my_script]
command = /home/foo/bar/venv/bin/python /home/foo/bar/scripts/my_script.py
environment = MY_ENV_VAR=FOO, MY_OTHER_ENV_VAR=BAR
autostart = True
autorestart = True
Then you could link it into /etc/supervisord/conf.d, run sudo supervisorctl to enter supervisor's management console, type reread so that supervisor notices the new config entry, and then update so the new program shows up on the status list.
To start/restart/stop a program you could execute sudo supervisorctl start/restart/stop my_script.
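Put together, the workflow looks roughly like this (my_script matches the [program:my_script] section above; the sudo prefix assumes supervisord runs as root):

sudo supervisorctl reread            # pick up the new config entry
sudo supervisorctl update            # start managing (and autostarting) the new program
sudo supervisorctl status            # my_script should show up as RUNNING
sudo supervisorctl restart my_script # restart it on demand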
I used an old-style init script with the start-stop-daemon utility. Look at the skeleton in /etc/init.d.
I use IDEA 10.5 for my Flask experimentation. Flask has an embedded test server (like Django does).
When I launch my test class, the dev server launches as well on port 5000. All good.
* Running on http://127.0.0.1:5000/
When I click on the "Stop process" button (the red square), I get the message saying the process has finished:
Process finished with exit code 143
However, the server is still alive (it responds to requests) and I can see that I still have a Python process running.
Obviously this prevents me from relaunching the test straight away; I have to kill the server process first.
How do you manage to get both your program and the server to end at the same time?
I guess what happens is that you start your Flask app, which then forks the development server as a new process. If you stop the app, the forked process is still running.
This looks like a problem that cannot easily be solved within the means of your IDE. You could add something to your main to kill the already running server process before starting the app again, but that seems ugly.
But why don't you just start your app with app.run(debug=True) as described in the Flask docs? The server will reload automatically every time you change your app, so you don't have to stop and restart it manually.
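A minimal sketch of that, with a placeholder route (the index view is just an example, not from the question):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, world!"

if __name__ == "__main__":
    # debug=True turns on the interactive debugger and the reloader,
    # so the dev server restarts itself whenever the code changes.
    app.run(debug=True)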
EDIT:
Something a bit quirky just came to my mind: if you just need a comfortable way to kill the server from within the IDE, all you have to do is introduce a syntax error in one of the files the reloader monitors and save the file; the server will choke on it and die :)
This doesn't happen anymore with newer versions (tested with PyCharm 2.0)