django in docker not detecting SIGINT - python

This is a funny Stack Overflow question, because I already have an answer, but that answer is a few years old and I can't find anything newer, even though the problem seems like it would be fairly high profile.
I am using docker-compose to start a few containers. Two of them use standard postgres and redis images.
The others run Django 2.2.9 (and Celery). This is a development environment, and I start them with docker-compose like this:
command: ./manage.py runserver 0.0.0.0:80
docker-compose stop sends a SIGINT. The redis and postgres containers exit quickly.
The Django containers don't, so docker-compose stop loses patience and kills them.
(PyCharm, by contrast, currently has infinite patience and doesn't send a kill until I force it.)
This post from 2015 referring to Django 1.9 (http://blog.lotech.org/fix-djangos-runserver-when-run-under-docker-or-pycharm.html) says that
"The quick fix is to specifically listen for SIGINT and SIGTERM in
your manage.py, and sys.kill() when you get them. So modify your
manage.py to add a signal handler:"
and it shows how. The fix of changing manage.py to catch SIGINT works and is only a handful of lines, although it doesn't help Celery, which has its own startup.
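The handler amounts to something like this; a rough sketch rather than the blog's exact code, reading its "sys.kill()" as a plain exit, with "myproject.settings" standing in for the real settings module:
#!/usr/bin/env python
import os
import signal
import sys

def handle_stop(signum, frame):
    # exit right away so docker-compose stop does not have to escalate to SIGKILL
    sys.exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGINT, handle_stop)
    signal.signal(signal.SIGTERM, handle_stop)
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)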
So I could carry forward my own version of manage.py and patch Celery too, but is this really still the way to fix this?
I see that the Dockerfile could have
STOPSIGNAL SIGINT
but it doesn't make any difference, I suppose because the entry point is managed by docker-compose.

Use the exec (list) form of command:
command: ["./manage.py", "runserver", "0.0.0.0:80"]
With the string form, the command is wrapped in a shell, so the shell runs as PID 1 and the stop signal never reaches runserver; the list form runs manage.py directly as PID 1, so it receives the signal itself. See https://hynek.me/articles/docker-signals/ for the details.

Related

Git bash "Watching for file changes with StatReloader" stuck and never loads

I have set up a Django project in a virtual environment on my PC. When I use the command
python manage.py runserver 0.0.0.0:8000
Git Bash stops doing anything and I have to end the program to start over. I have waited several minutes, and when I end the session, a dialog says:
Processes are running in session:
WPID PID COMMAND
14904 1534 c:\Users\mine\AppData\Loca
Close anyway?
I have looked at every related question and tried every solution, but I cannot get this to work, either inside or outside the virtual environment.
Not sure if this applies, but I also noticed that in my task manager, python3.9.exe appears twice when trying to start the server. The status says running, and the PIDs are different numbers.
This is because something is already running on port 8000. You can run the Django server on another port such as 8080, or you can kill the task that is already using the port. Follow the link to see exactly how to kill a running process.
It is highly likely that another application is already using port 8000. Try running the server on another port, say 8088, and check whether the same issue persists.
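If you want to confirm that before changing anything, a few lines of Python using the standard socket module (just a quick sketch) will tell you whether something is already listening on the port:
import socket

def port_in_use(port, host="127.0.0.1"):
    # connect_ex returns 0 when something accepts the connection, i.e. the port is taken
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

print("port 8000 in use:", port_in_use(8000))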
To run on a specific port, first kill all the PIDs that Git Bash shows you (ps -ef), kill them using kill -9 <pid_no>, and then run your runserver command again.
Ex:
ps -ef
kill -9 123
I ended up paying someone to fix this. Here is the conversation I had with the person who found the issue. Hope this helps someone out there.
Also, I noticed it sometimes still freezes at "Watching for file changes with StatReloader" until I go to the browser and type in http://localhost:8000/. Once I do that, the rest of the text loads and my browser shows "The install worked successfully! Congratulations!"
it works, what did you do?
- There were existing processes on port 8000. I think you had the server running and kept trying over and over. So I killed the process using port 8000.
- Next, I ran the Django project again.
- So, now it's working well.
ok. Can you tell me what you did at the start to make it work? Or was it always working?
- I started a new Git Bash session when I began. Maybe the old Git Bash session had problems.
But this had been going on for a day; I restarted the computer several times and restarted Git Bash. Same issue.
You must have done something.
- Sorry, I didn't do anything except restart git bash
how did you restart it?
- right click top of window, select NEW
How do I turn server off?
- press Ctrl + C
- And you didn't quit the server properly before. Maybe you closed Git Bash directly and the server was still live.
- So we had several processes holding port 8000.
Ok. How can I see what is on that port now?
- You can use this cli
- netstat -ano | findstr :<PORT>
- Nothing is running on port 8000 now.
I have never found a solution to this problem, even after killing ports, changing the port, etc.
It never displays the following part:
Starting development server at http://127.0.0.1:8000/
However, when I use the Git Bash terminal included in VS Code, it does display the URL.
Terminal > New Terminal, then:
python manage.py runserver 127.0.0.1:8080
Result: the development server starts and the URL is displayed.

How can I run luigid and luigi task within docker? [duplicate]

I have built a base image from a Dockerfile, named centos+ssh. In centos+ssh's Dockerfile, I use CMD to run the ssh service.
Then I want to build an image that runs another service, rabbitmq. The Dockerfile:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD /opt/mq/sbin/rabbitmq-server start
To start the rabbitmq container, run:
docker run -d -p 222:22 -p 4149:4149 rabbitmq
but the ssh service doesn't work; it seems rabbitmq's Dockerfile CMD overrides centos+ssh's CMD.
How does CMD work inside a Docker image?
If I want to run multiple services, how do I do that? Using supervisor?
You are right: the second Dockerfile overrides the CMD of the first one. Docker will always run a single command, not more. So at the end of your Dockerfile, you can specify exactly one command to run.
But you can execute both commands in one line:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD service sshd start && /opt/mq/sbin/rabbitmq-server start
To make your Dockerfile a little bit cleaner, you could also put your CMD commands into a separate file:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD sh /home/centos/all_your_commands.sh
And a file like this:
service sshd start &
/opt/mq/sbin/rabbitmq-server start
Even though CMD is written down in the Dockerfile, it really is runtime information. Just like EXPOSE, but contrary to e.g. RUN and ADD. By this, I mean that you can override it later, in an extending Dockerfile, or simply in your run command, which is what you are experiencing. At all times, there can be only one CMD.
If you want to run multiple services, I would indeed use supervisor. You can make a supervisor configuration file for each service, ADD them into a directory, and run supervisor with supervisord -c /etc/supervisor/supervisord.conf, pointing at a main configuration file which loads all your services and looks like this:
[supervisord]
nodaemon=true
[include]
files = /etc/supervisor/conf.d/*.conf
If you would like more details, I wrote a blog on this subject here: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
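For illustration, a per-service file dropped into /etc/supervisor/conf.d/ could look roughly like this (the sshd path is only an example; the rabbitmq path is the one from your question, and both programs must stay in the foreground for supervisord to manage them):
[program:sshd]
; sshd -D keeps the daemon in the foreground
command=/usr/sbin/sshd -D
autorestart=true

[program:rabbitmq]
; run the broker in the foreground as well
command=/opt/mq/sbin/rabbitmq-server
autorestart=true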
While I respect the answer from qkrijger explaining how you can work around this issue, I think there is a lot more we can learn about what's going on here.
To actually answer your question of "why": I think it would be helpful for you to understand how the docker stop command works, and why all processes should be shut down cleanly to prevent problems (file corruption, etc.) when you try to restart them.
Problem: What if Docker did start SSH from its own CMD and started RabbitMQ from your Dockerfile? "The docker stop command attempts to stop a running container first by sending a SIGTERM signal to the root process (PID 1) in the container." Which process is Docker tracking as PID 1, the one that will get the SIGTERM? Will it be SSH or RabbitMQ? "According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them. Most Docker containers do not have an init process that does this correctly, and as a result their containers become filled with zombie processes over time."
Answer: Docker simply takes the last CMD as the one that gets launched as the root process with PID 1 and receives the SIGTERM from docker stop.
Suggested solution: You should use (or create) a base image specifically made for running more than one service, such as phusion/baseimage.
It is important to note that tini exists exactly for this reason, and as of Docker 1.13 and up, tini is officially part of Docker, which tells us that running more than one process in Docker IS valid. So even if someone claims to be more skilled regarding Docker and insists that it is absurd to think of doing this, know that it is not; there are perfectly valid situations for doing so.
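To make the PID 1 duties concrete, here is a rough Python sketch of what an init like tini does: forward termination signals to its child and reap orphaned processes. It is only an illustration (tini itself is written in C), using the rabbitmq command from the question as the child:
import os
import signal
import subprocess
import sys

# launch the real service as a child; this script itself runs as PID 1
child = subprocess.Popen(["/opt/mq/sbin/rabbitmq-server", "start"])

def forward(signum, frame):
    # pass SIGTERM/SIGINT from `docker stop` on to the actual service
    child.send_signal(signum)

signal.signal(signal.SIGTERM, forward)
signal.signal(signal.SIGINT, forward)

exit_code = 0
while True:
    try:
        pid, status = os.wait()  # reaps any child, including orphans re-parented to PID 1
    except ChildProcessError:
        break  # no children left to wait for
    if pid == child.pid:
        exit_code = os.waitstatus_to_exitcode(status)  # Python 3.9+
sys.exit(exit_code)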
Good to know:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
http://www.techbar.me/stopping-docker-containers-gracefully/
https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
https://github.com/phusion/baseimage-docker#docker_single_process
The official Docker answer: Run multiple services in a container.
It explains how you can do it with an init system (systemd, sysvinit, upstart), a script (CMD ./my_wrapper_script.sh), or a supervisor like supervisord.
The && workaround only works for services that start in the background (daemons) or that execute quickly without interaction and release the prompt. If you do this with a service that stays in the foreground (keeps the prompt), only the first service will start.
To see why CMD is designed to run only one service per container, consider what would happen if the secondary servers running in the same container were not trivial or auxiliary but "major" (e.g. storage bundled with the frontend app). For starters, it would break several important containerization features such as horizontal (auto-)scaling and rescheduling between nodes, both of which assume there is only one application (source of CPU load) per container. Then there is the issue of vulnerabilities: more servers exposed in a container means more frequent patching of CVEs.
So let's admit that it is a 'nudge' from the Docker (and Kubernetes/OpenShift) designers towards good practices, and we should not reinvent workarounds (SSH is not necessary; we have docker exec / kubectl exec / oc rsh, which are designed to replace it).
More info
https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container

How to Make uWSGI die when it encounters an error?

I have my Python app running through uWSGI. Rarely, the app will encounter an error which prevents it from loading. At that point, if I send requests to uWSGI, I get the error "no python application found, check your startup logs for errors". What I would like to happen in this situation is for uWSGI to just die, so that the program managing it (Supervisor, in my case) can restart it. Is there a setting or something I can use to force this?
More info about my setup:
Python 2.7 app being run through uWSGI in a docker container. The docker container is managed by Supervisor, and if it dies, Supervisor will restart it, which is what I want to happen.
After an hour of searching, I finally found a way to do this. Just pass the --need-app argument when starting uWSGI, or add need-app = true in your .ini file, if you run things that way. No idea why this is off by default (in what situation would you ever want uWSGI to keep running when your app has died?) but so it goes.
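If you run things through an .ini file, the option just sits alongside whatever you already have there; roughly like this, where the module and http lines are placeholders:
[uwsgi]
; placeholders for your existing configuration
module = myapp.wsgi:application
http = :8000
; the relevant line: exit instead of answering "no python application found"
need-app = true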

How do I run Django as a service?

I am having difficulty running Django on my Ubuntu server. I am able to run Django but I don't know how to run it as a service.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Here is what I am doing:
I log onto my Ubuntu server
Start my Django process: sudo ./manage.py runserver 0.0.0.0:80 &
Test: Traffic passes and the app displays the right page.
Now I close my terminal window and it all stops. I think I need to run it as a service somehow, but I can't figure out how to do that.
How do I keep my Django process running on port 80 even when I'm not logged in?
Also, I get that I should be linking it through Apache, but I'm not ready for that yet.
Don't use manage.py runserver to run your server on port 80. Not even for development. If you need that for your development environment, it's still better to redirect traffic from 8000 to 80 through iptables than to run your Django application as root.
The Django documentation (or other answers to this post) explains how to run it behind a real web server.
If, for any other reason, you need a process to keep running in the background after you close your terminal, you can't just run it with &: it will run in the background but keep your session's session ID, and it will be closed when the session leader (your terminal) is terminated.
You can circumvent this behaviour by running the process through the setsid utility. See the setsid man page for more details.
Anyway, if after reading the other comments you still want to use manage.py runserver, just add "nohup" before your command line:
sudo nohup /home/ubuntu/django_projects/myproject/manage.py runserver 0.0.0.0:80 &
For this kind of job, since you're on Ubuntu, you should use the awesome Ubuntu upstart.
Just specify a file, e.g. django-fcgi, in case you're going to deploy Django with FastCGI:
/etc/init/django-fcgi.conf
and put the required Upstart configuration in it.
Then you would be able to start and stop your runserver command simply with:
start runserver
and
stop runserver
Examples of managing the deployment of Django processes with Upstart: here and here. I found those two links helpful when setting up this deployment structure myself.
The problem is that & runs a program in the background but does not separate it from the spawning process. However, an additional issue is that you are running the development server, which is only for testing purposes and should not be used for a production environment.
Use gunicorn or Apache with mod_wsgi. The documentation for Django and these projects makes it explicit how to serve it properly.
If you just want a really quick-and-dirty way to run your django dev server on port 80 and leave it there -- which is not something I recommend -- you could potentially run it in a screen. screen will create a terminal that will not close even if you close your connection. You can even run it in the foreground of a screen terminal and disconnect, leaving it to run until reboot.
If you are using virtualenv, the sudo command will execute the manage.py runserver command outside of the virtual environment context, and you'll get all kinds of errors.
To fix that, I did the following:
While working in the virtualenv, type:
which python
outputs: /home/oleg/.virtualenvs/openmuni/bin/python
then type:
sudo !!
outputs: /usr/bin/python
Then all that's left to do is create a symbolic link so that the global python points to the python in the virtualenv you currently use and would like to run on 0.0.0.0:80.
First move the global python binary to a backup location:
mv /usr/bin/python /usr/bin/python.old
Then link /usr/bin/python to the virtualenv's python; that should do it:
ln -s /home/oleg/.virtualenvs/openmuni/bin/python /usr/bin/python
That's it! Now you can run sudo python manage.py runserver 0.0.0.0:80 in the virtualenv context!
Keep in mind that if you are using a Postgres DB in your local development setup, you'll probably need a root role.
Credit to #ydaniv

Gunicorn not reloading a Django application

I'm getting inconsistent code-reloading behavior, with a Django 1.3 application and gunicorn 0.12.1, running inside a virtualenv.
Gunicorn does not reload my application properly, even with a restart of the specific gunicorn process PID. When I run a basic runserver (through Django, via the manage.py command) this is not an issue.
When I remove and recreate my virtualenv, gunicorn runs as expected with the new code.
Is there a Python cache or something? I also tried to remove all *.pyc files.
Try this:
$ kill -HUP masterpid
Also, have a look at some of the notes at the bottom of the following post.
I ran into variations of this problem as well -- as advised in the article linked to by Mr. Pokomy, killing the gunicorn master process with a HUP signal seems to do the trick.
One can easily set up auto-reloading on file save if you use the Python watchdog module; the setup is actually pretty self-explanatory, so here's a snippet from my development supervisord.conf file:
[program:ost2]
autostart=true
command=/usr/local/share/python/gunicorn --debug\
-c /Users/fish/Dropbox/ost2/ost2/utils/gunicorn/ost2-debug.py wsgi_debug
directory=/Users/fish/Dropbox/ost2/ost2
priority=500
; (etc)
[program:ost2-reloader]
autostart=true
autorestart=false
directory=/tmp
command=/usr/local/share/python/watchmedo shell-command\
--patterns="*.py;*.txt;*.html;*.css;*.less;*.js;*.coffee"\
-R --command='kill -HUP $(cat /usr/local/gunicorn/gunicorn.pid)'\
/Users/fish/Dropbox/ost2/ost2/
priority=996
; (etc)
(N.B. The backslashes in that sample sit before newlines that aren't actually in the conf file; I inserted those newlines for legibility, and I am not sure whether they would work IRL.)
The first program is the gunicorn process, which I run in a single thread during development in order to use the Werkzeug debugger. The second part is the interesting bit: that command says, "kill the process specified by the gunicorn PID file whenever there's a change in a file in this directory tree if the file's suffix matches one from this list".
Works like a charm for many, including me. If you don't know it, watchdog is very useful and worth a look in its own right.
