I have a Django project running as a WSGI application under Gunicorn. Ahh, too much for a Python newbie like me.
Below is the gunicorn command which runs at application startup:
exec ./env/bin/gunicorn $wsgi:application \
--config djanqloud/gunicorn-prometheus-config.py \
--name "djanqloud" \
--workers 3 \
--timeout 60 \
--user=defaultuser --group=nogroup \
--log-level=debug \
--bind=0.0.0.0:8002
Below is my wsgi.py file:
import os
from django.core.wsgi import get_wsgi_application
os.environ['DJANGO_SETTINGS_MODULE'] = 'djanqloud.settings'
application = get_wsgi_application()
The gunicorn command as shown by "ps -eaf" inside the docker container:
/opt/gq-console/env/bin/python2 /opt/gq-console/env/bin/gunicorn wsgi:application --config /opt/gq-console//gunicorn-prometheus-config.py --name djanqloud --workers 3 --timeout 60 --user=gq-console --group=nogroup --log-level=debug --bind=0.0.0.0:8002
There is one simple thread which I create inside the Django project; it is killed when the above workers are killed.
My question is:
Is there any way I can create my threads AGAIN when the above workers are automatically restarted?
I have tried to override the get_wsgi_application() function in the wsgi.py file, but got the error below when the workers booted:
AppImportError: Failed to find application object: 'application'.
I am new to Python, Django, and WSGI applications, so please elaborate in your answers.
Basically, I am looking for a place to keep startup code that runs whenever the WSGI workers are killed and automatically restarted.
Thanks.
To start your application automatically after a server boot, or to restart it if a gunicorn process gets killed, use a process control system such as supervisor, which will take care of restarting your gunicorn process automatically.
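Alternatively, if what you really need is per-worker startup code, gunicorn's server hooks are worth a look: a post_fork hook defined in the config file you already pass with --config runs in every worker just after it is forked, including workers that gunicorn restarts automatically. A minimal sketch, assuming a daemon thread is what you want; my_background_task is a hypothetical stand-in for whatever your thread does:
# in gunicorn-prometheus-config.py (or any gunicorn config file)
import threading

def my_background_task():
    # hypothetical stand-in for the work your thread performs
    pass

def post_fork(server, worker):
    # gunicorn calls this hook in each worker process right after it is
    # forked, so the thread is recreated whenever a worker restarts
    t = threading.Thread(target=my_background_task)
    t.daemon = True  # let the thread die together with its worker
    t.start()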
I have multiple uWSGI vassals, all monitored by a uWSGI emperor. I update the code for my app (Django) and I want the emperor to perform a clean reload of one of the vassals. To do that, I
touch vassal-foo.ini
In the logs I see [emperor] reload the uwsgi instance vassal-foo.ini. This sounds promising, but the app is not reloaded. It continues to run the old version. Checking the process (PID) startup time, indeed, it has not been restarted.
Any hints as to what might cause this? A few things that might be uncommon:
Neither the emperor nor the vassal run in master mode
Emperor was installed with pip and runs under initctl
kill -9-ing the vassal triggers a correct reload (obviously)
I use symlinks
I have a secondary thread inside my python app (threading.Thread(target).start()) running with daemon=True
Things I tried that did not work:
Run the process without any additional threads (remove threading.Thread(target).start())
Touching with touch --no-dereference vassal-foo.ini
Starting emperor with --emperor-nofollow
vassal-foo.ini:
master = false
processes = 1
thunder-lock = true
enable-threads = true
socket = /tmp/%n.sock
chmod-socket = 666
vacuum = true
Emperor:
exec /tmp/uwsgi --emperor /tmp/configs/uwsgi/ --die-on-term --uid me --gid me --logto /tmp/logs/uwsgi-emperor.log
uWSGI version
$ uwsgi --version
2.0.17
The problem is that your vassal does not run in master mode.
All of uWSGI's reload mechanisms require a master process.
In emperor mode, when you touch the ini file, the emperor sends a SIGHUP to the vassal and records in the log that the vassal was reloaded. But if the vassal has no master, it simply ignores the SIGHUP. That is why you see the reload in the log while nothing actually happens.
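So the fix is a one-line change in vassal-foo.ini (the rest of the config can stay as it is):
master = true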
Hi, I have deployed Django with uWSGI and Nginx following this tutorial: http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
Everything is running fine, but I face a challenge when updating Python code: I don't know an efficient way to deploy new changes.
After trial and error, I used the following commands to deploy:
git pull; sudo service uwsgi stop; sudo service nginx restart; sudo service uwsgi restart; /usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals
These commands work, but I face the following problems:
uWSGI runs in the foreground. Every time I make changes, a new uWSGI instance starts running.
Due to the multiple uWSGI instances, my AWS server crashes from memory exhaustion.
I want to know what commands I should run to pick up changes in the Python code.
PS: In my previous Apache Django setup I only had to restart Apache. Is it possible to pick up changes by only restarting Nginx?
Try this:
git pull
python manage.py migrate # to run any migrations
sudo service uwsgi restart
Press Ctrl + Z, then type bg and press Enter.
This should run the process in the background.
Please let me know if this works.
Have a look at the following for running uWSGI in the background. Create an .ini file at /etc/uwsgi/sites/projectname.ini; it would look like this (for Ubuntu 16.04):
[uwsgi]
project = projectname
base = projectpath
chdir = %(base)/%(project)
home = %(base)/Env/%(project)
module = %(project).wsgi:application
master = true
processes = 5
socket = %(base)/%(project)/%(project).sock
chmod-socket = 666
vacuum = true
(For Ubuntu 16.04):
Then create the following systemd service file at /etc/systemd/system/uwsgi.service:
[Unit]
Description=uWSGI Emperor service
After=syslog.target
[Service]
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
Refresh the state of the systemd init system to pick up this new uWSGI service:
sudo systemctl daemon-reload
In order to start the service, you'll need to run the following:
sudo systemctl start uwsgi
In order to start uWSGI on reboot, you will also need:
sudo systemctl enable uwsgi
You can use the following to check its status:
systemctl status uwsgi
(For Ubuntu 14.04):
Create an upstart script for uWSGI:
sudo nano /etc/init/uwsgi.conf
Then add the following lines to the file you just created:
description "uWSGI application server in Emperor mode"
start on runlevel [2345]
stop on runlevel [!2345]
setuid user
setgid www-data
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
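Once the upstart job is in place, you can start it immediately without rebooting (on Ubuntu 14.04 the service command dispatches to upstart jobs):
sudo service uwsgi start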
In my Procfile I have the following:
worker: cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
However, when my app submits a task, it never runs. I think the celery worker is at least sort of working, though, because
heroku logs --app appname
every so often gives me one of these:
2016-07-22T07:53:21+00:00 app[heroku-redis]: source=REDIS sample#active-connections=14 sample#load-avg-1m=0.03 sample#load-avg-5m=0.09 sample#load-avg-15m=0.085 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664884.0kB sample#memory-free=13458244.0kB sample#memory-cached=187136kB sample#memory-redis=566800bytes sample#hit-rate=0.17778 sample#evicted-keys=0
Also, when I open up bash by running
heroku run bash --app appname
and then type in
cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
It immediately tells me the task has been received and then executes it. I would like this to happen without me having to log in manually and run the command. Is that possible? Do I need a paid account on Heroku to do that?
I figured it out. It turns out you also have to run
heroku ps:scale worker=1 --app appname
Or else you won't actually be running a worker.
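To confirm the worker dyno is actually up after scaling, check the process list:
heroku ps --app appname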
I'm using Flask as a web server for my UI (a simple web interface which controls recording via GStreamer on Ubuntu from a webcam and a framegrabber simultaneously; kind of a simple player).
Every time, I need to run the command "python main.py" manually from the command prompt to start the server.
I've tried an init.d solution, and even writing a simple shell script that is launched at startup after each reboot, but it fails to keep the server up and running (it just invokes the server and then terminates, I guess).
Is there any solution that would start the web server on every boot and keep it running?
I'd like to configure my system to boot directly into the browser, so there should be no need for any further action by the user.
Any kind of suggestion/help is appreciated.
I'd suggest using supervisor; the documentation is here.
For a very simple demo, after you have installed it and finished the setup, create a new config file like this:
[program:flask_app]
command = python main.py
directory = /dir/to/your/app
autostart = true
autorestart = true
then
$ sudo supervisorctl update
Now you should be good to go. The Flask app will start every time you boot your machine. (Note: the distribution package is already integrated into the service management infrastructure; if you're using something else, see here.)
To check whether your app is running:
$ sudo supervisorctl status
For production, you can use nginx + uWSGI + supervisor. The Flask deployment documentation is here.
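As a rough sketch of the supervisor side of that setup (the uwsgi binary path and ini location are placeholders for your own installation):
[program:uwsgi]
command = /usr/local/bin/uwsgi --ini /path/to/your/uwsgi.ini
autostart = true
autorestart = true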
One well-documented solution is to use Gunicorn and an Nginx server:
Install the components and set up a Python virtualenv with the dependencies
Create the wsgi.py file:
from myproject import application
if __name__ == "__main__":
    application.run()
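(This assumes the myproject module exposes the WSGI app under the name application; a minimal hypothetical myproject.py using Flask would be:)
# myproject.py -- hypothetical minimal version
from flask import Flask

application = Flask(__name__)

@application.route("/")
def index():
    return "Hello from myproject!"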
That will be handled by Gunicorn:
gunicorn --bind 0.0.0.0:8000 wsgi
Configure Gunicorn by setting up a systemd service file at /etc/systemd/system/myproject.service:
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn \
          --workers 3 --bind unix:myproject.sock -m 007 wsgi:application
[Install]
WantedBy=multi-user.target
Start the Gunicorn service now and enable it at boot:
sudo systemctl start myproject
sudo systemctl enable myproject
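You can then verify that the service came up correctly with:
sudo systemctl status myproject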
I cannot seem to find a solid answer to this after scouring the web. Currently, I have my directory set up this way:
flaskapp
-app
-intro_to_flask
+__init__.py
+config.py
+routes.py
+forms.py
-runserver.py
-Readme.md
-bin
-include
-lib
-view
Procfile
requirements.txt
So I am not sure whether the Procfile is set up correctly... I have it set up this way:
web: gunicorn --pythonpath app runserver
However, when I run foreman start... it goes into a loop that keeps restarting the connection. I tried manually setting the port in the virtual environment with export PORT=5001, but I am still getting the same error:
Running on http://127.0.0.1:5000/
12:21:20 web.1 | * Restarting with reloader
12:21:20 web.1 | 2014-02-22 12:21:20 [34532] [INFO] Starting gunicorn 18.0
12:21:20 web.1 | 2014-02-22 12:21:20 [34532] [ERROR] Connection in use: ('0.0.0.0', 5001)
Also, I have killed all gunicorn processes that were in use and tried running foreman start again... any ideas what could be going on?
Here is my runserver.py
from intro_to_flask import app
app.run(debug=True)
When you run your app on gunicorn you do not use the same starter script that launches the development server. All gunicorn needs to know is where to import the application from. In your case, I think what you want in your Procfile is something like this:
web: gunicorn --pythonpath app intro_to_flask:app
Not sure if this will work as is or if you will need to make minor tweaks. The idea is that you need to give gunicorn the package or module that defines the application, then a colon, and then the application instance symbol.
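For that Procfile line to resolve, intro_to_flask/__init__.py must create the app instance at import time. A hypothetical minimal version matching your directory listing (the config import is an assumption based on your config.py):
# app/intro_to_flask/__init__.py -- hypothetical sketch
from flask import Flask

app = Flask(__name__)
app.config.from_object('intro_to_flask.config')

from intro_to_flask import routes  # imported last so routes.py can use app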
I hope this helps.