Gunicorn not reloading a Django application - python

I'm getting inconsistent code-reloading behavior, with a Django 1.3 application and gunicorn 0.12.1, running inside a virtualenv.
Gunicorn does not reload my application properly, even when I restart the specific gunicorn process by its PID. When I run a basic runserver through Django's manage.py command, this is not an issue.
When I remove and recreate my virtualenv, gunicorn runs as expected with the new code.
Is there some Python cache involved? I have also tried removing all *.pyc files.

Try this:
$ kill -HUP masterpid
Also, have a look at some of the notes at the bottom of the following post.
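A small sketch of what that looks like in practice, assuming gunicorn was started with a pid file such as --pid /tmp/gunicorn.pid (the flag and path here are illustrative, not taken from the question):
# reload by signalling the master process recorded in the pid file
$ kill -HUP $(cat /tmp/gunicorn.pid)
# or, without a pid file, find the master process by hand first
$ ps aux | grep gunicorn
On HUP the master gracefully restarts its workers, and the new workers re-import your application code (unless the app is preloaded).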

I ran into variations of this problem as well -- as advised in the article linked to by Mr. Pokomy, killing the gunicorn master process with a HUP signal seems to do the trick.
One can set up auto-reloading on file save easily, if you use the python watchdog module; the setup is actually pretty self-explanatory, so here's a snippet from my development supervisord.conf file:
[program:ost2]
autostart=true
command=/usr/local/share/python/gunicorn --debug \
-c /Users/fish/Dropbox/ost2/ost2/utils/gunicorn/ost2-debug.py wsgi_debug
directory=/Users/fish/Dropbox/ost2/ost2
priority=500
; (etc)
[program:ost2-reloader]
autostart=true
autorestart=false
directory=/tmp
command=/usr/local/share/python/watchmedo shell-command \
--patterns="*.py;*.txt;*.html;*.css;*.less;*.js;*.coffee" \
-R --command='kill -HUP $(cat /usr/local/gunicorn/gunicorn.pid)' \
/Users/fish/Dropbox/ost2/ost2/
priority=996
; (etc)
(N.B. The backslashes and newlines in that sample are only there for legibility -- in the actual conf file each command is one long line; I am not sure whether backslash continuations would work IRL.)
The first program is the gunicorn process, which I run in a single thread during development in order to use the Werkzeug debugger. The second part is the interesting bit: that command says, "send a HUP signal to the process named in the gunicorn PID file whenever a file in this directory tree changes and its suffix matches one from this list".
Works like a charm for many, myself included. If you don't know it, watchdog is very useful and worth a look in its own right.
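If you would rather not use supervisord and watchmedo, the same idea can be scripted directly against the watchdog API. A rough Python sketch (the watch directory is a placeholder; the pid file path is the one from the watchmedo command above):
#!/usr/bin/env python
# reloader.py -- send HUP to the gunicorn master whenever a watched file changes
import os
import signal
import time

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

PID_FILE = "/usr/local/gunicorn/gunicorn.pid"   # same pid file as in the watchmedo command above
WATCH_DIR = "/path/to/your/project"             # placeholder: your project root

class ReloadHandler(PatternMatchingEventHandler):
    def on_any_event(self, event):
        # read the master pid and ask gunicorn to gracefully restart its workers
        with open(PID_FILE) as fh:
            master_pid = int(fh.read().strip())
        os.kill(master_pid, signal.SIGHUP)

if __name__ == "__main__":
    handler = ReloadHandler(patterns=["*.py", "*.html", "*.css", "*.js"])
    observer = Observer()
    observer.schedule(handler, WATCH_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()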

django in docker not detecting SIGINT

This is a funny Stack Overflow question, because I have an answer, but the answer is a few years old. I can't find much newer content, yet it seems like the problem would be quite high profile.
I am using docker-compose to start a few containers. Two of them use standard postgres and redis images.
The others are Django 2.2.9 (and Celery). This is a development environment, and I start them with docker-compose, like this:
command: ./manage.py runserver 0.0.0.0:80
docker-compose stop sends a SIGINT. The redis and postgres containers exit quickly.
The Django containers don't; docker-compose stop loses patience and kills them.
(And PyCharm currently has infinite patience and doesn't send a kill until I force it.)
This post from 2015 referring to Django 1.9 (http://blog.lotech.org/fix-djangos-runserver-when-run-under-docker-or-pycharm.html) says that
"The quick fix is to specifically listen for SIGINT and SIGTERM in
your manage.py, and sys.kill() when you get them. So modify your
manage.py to add a signal handler:"
and it says how. The fix of changing manage.py to catch SIGINT works, and it's only a handful of lines, although it doesn't work for Celery, which has its own startup.
So I can carry forward my own version of manage.py and fix Celery, but is this really still the way to fix this?
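For reference, the kind of change the linked post describes looks roughly like this (a sketch, not the article's exact code; "myproject.settings" is a placeholder for your settings module):
#!/usr/bin/env python
# manage.py (excerpt) -- sketch of the signal-handling fix described above
import os
import signal
import sys

def _exit_on_signal(signum, frame):
    # exit cleanly so docker-compose stop doesn't have to escalate to SIGKILL
    sys.exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGTERM, _exit_on_signal)
    signal.signal(signal.SIGINT, _exit_on_signal)
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)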
I see the Dockerfile could have
STOPSIGNAL SIGINT
but it doesn't make any difference, I suppose because the entry point is managed by docker-compose.
Use the list variant of command:
command: ["./manage.py", "runserver", "0.0.0.0:80"]
See https://hynek.me/articles/docker-signals/ for details on why: with the plain string form, Docker wraps the command in a shell (sh -c), so the shell becomes PID 1 and your signal never reaches manage.py, while the list (exec) form runs manage.py as PID 1 directly.

How can I run luigid and luigi task within docker? [duplicate]

I have built a base image, named centos+ssh, from a Dockerfile. In centos+ssh's Dockerfile, I use CMD to run the ssh service.
Then I want to build an image that runs another service, named rabbitmq; the Dockerfile:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD /opt/mq/sbin/rabbitmq-server start
To start the rabbitmq container, I run:
docker run -d -p 222:22 -p 4149:4149 rabbitmq
but the ssh service doesn't work; it seems rabbitmq's Dockerfile CMD overrides centos+ssh's CMD.
How does CMD work inside a Docker image?
If I want to run multiple services, how do I do that? Using supervisor?
You are right: the second Dockerfile overrides the CMD of the first one. Docker will always run a single command, not more, so at the end of your Dockerfile you can specify exactly one command to run.
But you can execute both commands in one line:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD service sshd start && /opt/mq/sbin/rabbitmq-server start
To make your Dockerfile a little bit cleaner, you could also put your CMD commands into an extra file:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD sh /home/centos/all_your_commands.sh
where all_your_commands.sh looks like this:
service sshd start &
/opt/mq/sbin/rabbitmq-server start
Even though CMD is written down in the Dockerfile, it really is runtime information. Just like EXPOSE, but contrary to e.g. RUN and ADD. By this, I mean that you can override it later, in an extending Dockerfile, or simply in your run command, which is what you are experiencing. At all times, there can be only one CMD.
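For example, using the image tag from the question, the baked-in CMD can be replaced for one run by appending a command to docker run:
# replace the image's CMD for this run, e.g. to get an interactive shell
docker run -it rabbitmq /bin/bash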
If you want to run multiple services, I would indeed use supervisor. You can make a supervisor configuration file for each service, ADD these in a directory, and run supervisor with supervisord -c /etc/supervisor to point to a supervisor configuration file which loads all your services and looks like:
[supervisord]
nodaemon=true
[include]
files = /etc/supervisor/conf.d/*.conf
If you would like more details, I wrote a blog on this subject here: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
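Put together, a Dockerfile using this approach might look roughly like the sketch below. The install command is an assumption for a CentOS-style base, and the config paths simply mirror the [include] snippet above; adjust both to your image:
FROM centos+ssh
# install supervisor (the exact install command depends on the base image)
RUN yum install -y python-setuptools && easy_install supervisor
# main config (the [supervisord]/[include] snippet above) plus one .conf file per service
ADD supervisord.conf /etc/supervisor/supervisord.conf
ADD conf.d/ /etc/supervisor/conf.d/
EXPOSE 22 4149
CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]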
While I respect the answer from qkrijger explaining how you can work around this issue, I think there is a lot more we can learn about what's going on here...
To actually answer your question of "why"... I think it would be helpful for you to understand how the docker stop command works, and that all processes should be shut down cleanly to prevent problems when you try to restart them (file corruption etc.).
Problem: What if Docker did start SSH from its command and started RabbitMQ from your Dockerfile? "The docker stop command attempts to stop a running container first by sending a SIGTERM signal to the root process (PID 1) in the container." Which process is Docker tracking as PID 1 that will get the SIGTERM? Will it be SSH or Rabbit? "According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them. Most Docker containers do not have an init process that does this correctly, and as a result their containers become filled with zombie processes over time."
Answer: Docker simply takes that last CMD as the one that will get launched as the root process with PID 1 and get the SIGTERM from docker stop.
Suggested solution: You should use (or create) a base image specifically made for running more than one service, such as phusion/baseimage
It is important to note that tini exists exactly for this reason, and as of Docker 1.13 and up tini is officially part of Docker, which tells us that running more than one process in a container IS VALID... So even if someone claims to be more skilled regarding Docker and insists that you are absurd for thinking of doing this, know that you are not. There are perfectly valid situations for doing so.
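If you are on Docker 1.13 or newer, the bundled tini can be enabled per container with the --init flag, which gives you a minimal PID 1 that reaps zombies and forwards signals to your command (shown here with the image name from the question):
# run the image from the question with tini as PID 1 (Docker 1.13+)
docker run --init -d -p 222:22 -p 4149:4149 rabbitmq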
Good to know:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
http://www.techbar.me/stopping-docker-containers-gracefully/
https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
https://github.com/phusion/baseimage-docker#docker_single_process
The official Docker answer to Run multiple services in a container.
It explains how you can do it with an init system (systemd, sysvinit, upstart), a script (CMD ./my_wrapper_script.sh) or a supervisor like supervisord.
The && workaround only works for services that start in the background (daemons) or that execute quickly without interaction and release the prompt. Try it with a foreground service (one that keeps the prompt) and only the first service will start.
To address why CMD is designed to run only one service per container, consider what would happen if the secondary servers run in the same container were not trivial/auxiliary but "major" (e.g. storage bundled with the frontend app). For starters, it would break several important containerization features, such as horizontal (auto-)scaling and rescheduling between nodes, both of which assume there is only one application (one source of CPU load) per container. Then there is the issue of vulnerabilities: more servers exposed in a container means more frequent patching of CVEs...
So let's admit that this is a 'nudge' from the Docker (and Kubernetes/OpenShift) designers towards good practice, and we should not reinvent workarounds (SSH is not necessary; we have docker exec / kubectl exec / oc rsh, designed to replace it).
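For completeness, getting a shell in a running container without shipping an SSH daemon looks like this:
# <container> is a placeholder for the container name or id shown by docker ps
docker exec -it <container> /bin/sh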
More info
https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container

Allowing user www-data (apache) to call a python script that requires root privileges from a CGI script

The python script script.py is located in /usr/bin/monitor/scripts and its main function is to use subprocess.check_call() and subprocess.check_output() to call various administrative tools (both C programs located in /usr/bin/monitor/, created specifically for the machine, and Linux executables in /sbin like fdisk -l and df -h). It was written to run as root and print output from these programs in a useful way to the command line.
My project is to make the output from this script viewable through a webpage. I'm on a Beaglebone Black using Apache2, which executes files as user www-data from its DocumentRoot, /var/www/html/. The webpage is set up like this:
index.html uses an iframe to display the output of a python CGI script which is also located in /var/www/html/
script.cgi attempts to call/display output from script.py output using the subprocess module
The problem is that script.py is being called just fine, but each of the calls within script.py fails and returns script.py's error messages, because, I presume, they need to be run as root while Apache is running them as user www-data.
To try to get around this, I created a new group called bbb, added www-data to the group, then ran chown :bbb script.py to change its group to bbb. Unfortunately it was still causing the same problems, so I tried changing permissions from 755 to 775, which didn’t work either. I tried running chown :bbb * on the files/programs that script.py uses, also to no avail. Also, some of the executables script.py uses are in /sbin and I am cautious to just give it blanket root access to directories like this.
Since my attempts at fixing ownership issues felt a bit like 1000 monkey code, I created a new version of the script in which I build a list of HTML output: after each print statement in the original code, I append the same line of text as a string with HTML tags to the HTML output list, and at the end of the script (in whatami) it creates and writes a .txt file in /var/www/html/, then calls os.chmod("/var/www/html/data.txt", 0o755) to give Apache access. The CGI then calls subprocess.check_call() on script.py, then opens, reads, and prints each line with HTML formatting to the iframe in the webpage. This attempt at least resulted in accurate output but... it only updates when the script is run as root in a terminal, rather than re-running script.py every time the page is refreshed, which rather undermines the point of the webpage. I assume this means the subprocess check_call in the CGI script is not working correctly, but for some reason the subprocess call itself doesn't throw any errors or indications of failure, yet the text file comes back without being updated. Even with the subprocess call in a "try" block followed by a print('call successful'), it returns the success message and then the un-updated text file.
I'm a bit at a loss trying to figure out how to just force the script to run and do its thing in the background so that the file will update, without just giving Apache root access. I've read a few things about either wrapping the python script in a shell that causes it to be run as root, or changing sudoers to give www-data sudo privileges, but I do not want to introduce security issues or make what was intended to be a simple script allowing output to a webpage more convoluted than it already is. Any advice or direction would be greatly appreciated.
Best way IMO would be to "decouple" execution, by creating a localhost-only service which you "call" from the apache process by connecting via a local socket.
E.g. if using systemd:
Create: /etc/systemd/system/my-svc.socket
[Unit]
Description=My svc socket
[Socket]
ListenStream=127.0.0.1:1234
Accept=yes
[Install]
WantedBy=sockets.target
Create: /etc/systemd/system/my-svc@.service
[Unit]
Description=My Service
Requires=my-svc.socket
[Service]
Type=simple
ExecStart=/opt/my-service/script.sh %i
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
Create /opt/my-service/script.sh:
#!/bin/sh
echo "id=$(id)"
echo "args=$*"
Finish setup with:
$ sudo chmod +x /opt/my-service/script.sh
$ sudo systemctl daemon-reload
Try it out:
$ nc 127.0.0.1 1234
id=uid=0(root) gid=0(root) groups=0(root)
args=55-127.0.0.1:1234-127.0.0.1:32938
Then from your CGI, you'll need to do the equivalent of the nc command above (just a TCP connection).
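A sketch of what that CGI-side call might look like in Python, assuming the my-svc.socket unit above is listening on 127.0.0.1:1234:
#!/usr/bin/env python3
# CGI-side sketch: open a TCP connection to the local systemd socket service,
# read everything the service writes, and relay it to the browser.
import socket

def call_service(host="127.0.0.1", port=1234):
    chunks = []
    with socket.create_connection((host, port), timeout=10) as conn:
        while True:
            data = conn.recv(4096)
            if not data:            # the service exited and closed the connection
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", "replace")

print("Content-Type: text/plain")
print()
print(call_service())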
--jjo

Difference between Daemon and Upscript for Gunicorn in Django Production

I am deploying a Django site in production, and for a week now I have not been able to get the Gunicorn upstart script in /etc/init/project.conf to bind for Nginx, no matter what I do inside the Django virtual environment under the newly created user django, with gunicorn at /home/Django/project/bin/gunicorn. I need to know whether I can run a site in production as a daemon. I understand that a daemon is simply a background process not attached to any tty. But if I create a pid file by running a command from inside the virtualenv like "gunicorn --bind 127.0.0.1:9500 project.wsgi:application --config=/etc/gunicorn.d/gunicorn.py --name=project -p /tmp/project.pid", wouldn't that act as a service? My project without a virtual environment works just fine, but not with a virtual environment. I am learning Linux, so I need expert advice. Can I launch a project like this?
My upstart script, which I couldn't get working with the virtualenv, is given below.
description "Gunicorn daemon for Django project"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly trigger a respawn
respawn
setuid django
setgid django
chdir /home/django
exec gunicorn \
--name=project \
--pythonpath=project \
--bind=127.0.0.1:9500 \
--config /etc/gunicorn.d/gunicorn.py \
project.wsgi:application
If someone can help me change it to work with the virtualenv I would be thankful. Again, the same settings work just fine for my project without a virtualenv, but not for my second website, where the only difference is that the first project runs without a virtualenv and the second one runs from a virtualenv.
The point is that if you just run it like that, you don't have anything responsible for ensuring it remains up: if the process dies, or if you have to restart your server, you will have to re-run that command manually. That's what upstart, or supervisor, will do for you: monitor that it is indeed running, and bring it back if it isn't.
If you want help with debugging your upstart script, you will need to actually post it, plus any errors from the log.
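For what it's worth, the usual fix for the virtualenv case is simply to point exec at the virtualenv's own gunicorn binary, so the virtualenv's interpreter and packages are used. A sketch based on the script above (the gunicorn path is the one mentioned in the question; adjust it to wherever your virtualenv actually lives):
description "Gunicorn daemon for Django project (virtualenv)"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
respawn
setuid django
setgid django
chdir /home/django
# the virtualenv's gunicorn brings in the virtualenv's interpreter and packages
exec /home/Django/project/bin/gunicorn \
    --name=project \
    --pythonpath=project \
    --bind=127.0.0.1:9500 \
    --config /etc/gunicorn.d/gunicorn.py \
    project.wsgi:application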

How do I run Django as a service?

I am having difficulty running Django on my Ubuntu server. I am able to run Django but I don't know how to run it as a service.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Here is what I am doing:
I log onto my Ubuntu server
Start my Django process: sudo ./manage.py runserver 0.0.0.0:80 &
Test: Traffic passes and the app displays the right page.
Now I close my terminal window and it all stops. I think I need to run it as a service somehow, but I can't figure out how to do that.
How do I keep my Django process running on port 80 even when I'm not logged in?
Also, I get that I should be linking it through Apache, but I'm not ready for that yet.
Don't use manage.py runserver to run your server on port 80. Not even for development. If you need that for your development environment, it's still better to redirect traffic from port 8000 to port 80 through iptables than to run your Django application as root.
In the Django documentation (or in other answers to this post) you can find out how to run it behind a real web server.
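A rough sketch of the iptables redirect mentioned above (the exact rule depends on your setup):
# run the dev server on an unprivileged port
./manage.py runserver 0.0.0.0:8000
# redirect incoming traffic on port 80 to port 8000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8000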
If, for any other reason, you need a process to keep running in the background after you close your terminal, you can't just run the process with &, because it will run in the background but keep your session's session id, and will be closed when the session leader (your terminal) is terminated.
You can circumvent this behaviour by running the process through the setsid utility. See the setsid manpage for more details.
Anyway, if after reading the other comments you still want to run the process with manage.py, just add "nohup" before your command line:
sudo nohup /home/ubuntu/django_projects/myproject/manage.py runserver 0.0.0.0:80 &
For this kind of job, since you're on Ubuntu, you should use the awesome Ubuntu upstart.
Just specify a file, e.g. django-fcgi, in case you're going to deploy Django with FastCGI:
/etc/init/django-fcgi.conf
and put the required upstart syntax instructions.
Then you would be able to start and stop your runserver command simply with:
start runserver
and
stop runserver
Examples of managing the deployment of Django processes with Upstart: here and here. I found those two links helpful when setting up this deployment structure myself.
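A minimal sketch of such an upstart job, using the project path mentioned earlier in this thread (the job file name determines the name you pass to start/stop):
# /etc/init/runserver.conf
description "Django dev server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /home/ubuntu/django_projects/myproject
exec ./manage.py runserver 0.0.0.0:80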
The problem is that & runs a program in the background but does not separate it from the spawning process. However, an additional issue is that you are running the development server, which is only for testing purposes and should not be used for a production environment.
Use gunicorn or Apache with mod_wsgi. The documentation for Django and these projects should make it clear how to serve it properly.
If you just want a really quick-and-dirty way to run your django dev server on port 80 and leave it there -- which is not something I recommend -- you could potentially run it in a screen. screen will create a terminal that will not close even if you close your connection. You can even run it in the foreground of a screen terminal and disconnect, leaving it to run until reboot.
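For example, a quick sketch of the screen workflow:
$ screen -S django                       # start a named screen session
$ sudo ./manage.py runserver 0.0.0.0:80  # run the dev server inside it
# detach with Ctrl-A then D; the session (and the server) keeps running
$ screen -r django                       # reattach later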
If you are using virtualenv, the sudo command will execute the manage.py runserver command outside of the virtual environment context, and you'll get all kinds of errors.
To fix that, I did the following:
While working in the virtualenv, type:
which python
outputs: /home/oleg/.virtualenvs/openmuni/bin/python
then type:
sudo !!
outputs: /usr/bin/python
Then all that's left to do is create a symbolic link from the global python path to the python in the virtualenv that you currently use and would like to run on 0.0.0.0:80.
First move the global python binary to a backup location:
mv /usr/bin/python /usr/bin/python.old
Then create the symlink at /usr/bin/python, pointing at the virtualenv's python; that should do it:
ln -s /home/oleg/.virtualenvs/openmuni/bin/python /usr/bin/python
That's it! Now you can run sudo python manage.py runserver 0.0.0.0:80 in the virtualenv context!
Keep in mind that if you are using a Postgres DB in your local development setup, you'll probably need a root role.
Credit to #ydaniv
