When simultaneous AJAX requests are sent to runserver, it gets killed.
I know it used to be single-threaded, but the --nothreading option implies it is now multithreaded by default. Still, my runserver gets killed.
I am running on django==1.10 and python==2.7
How do I stop runserver from getting killed?
Or is this because of Python's multithreading limitations?
Through trial and error, I found a way to stop runserver from getting killed.
--nothreading actually solved my problem automagically.
So the final command to run the dev server is:
django-admin runserver 1.2.3.4:8000 --nothreading
or
python manage.py runserver 1.2.3.4:8000 --nothreading
Happy I am :-)
This is a funny Stack Overflow question, because I have an answer, but the answer is a few years old. I can't find much content that is newer, yet it seems like it would be a fairly high-profile issue.
I am using docker-compose to start a few containers. Two of them use standard postgres and redis images.
The others run Django 2.2.9 (and Celery). This is a development environment, and I start them with docker-compose, like this:
command: ./manage.py runserver 0.0.0.0:80
docker-compose stop sends a SIGTERM. The redis and postgres containers exit quickly.
The Django containers don't; docker-compose stop loses patience and kills them.
(PyCharm, on the other hand, currently has infinite patience and doesn't send a kill until I force it.)
This post from 2015, referring to Django 1.9 (http://blog.lotech.org/fix-djangos-runserver-when-run-under-docker-or-pycharm.html), says that
"The quick fix is to specifically listen for SIGINT and SIGTERM in
your manage.py, and sys.kill() when you get them. So modify your
manage.py to add a signal handler:"
and it says how. The fix of changing manage.py to catch SIGINT works and is only a handful of lines (a sketch follows), although it doesn't work for Celery, which has its own startup.
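For reference, a minimal sketch of that kind of handler near the top of manage.py (my own reconstruction of the linked post's idea, not its exact code; handle_stop is a name I made up):

import signal
import sys

def handle_stop(signum, frame):
    # Exit cleanly on SIGTERM/SIGINT so docker-compose stop doesn't time out.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_stop)
signal.signal(signal.SIGINT, handle_stop)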
So I can carry forward my own version of manage.py and fix Celery too, but is this really still the way to fix this?
I see the Dockerfile could have
STOPSIGNAL SIGINT
but it doesn't make any difference, I suppose because the entry point is managed by docker-compose.
Use the list (exec) variant of command:
command: ["./manage.py", "runserver", "0.0.0.0:80"]
With the string form, the command is wrapped in /bin/sh -c, so the shell is PID 1 and the stop signal never reaches manage.py; the list form runs manage.py as PID 1, so it receives the signal directly. See https://hynek.me/articles/docker-signals/ for the details.
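In context, the service definition would look something like this (the service name is just a placeholder):

services:
  web:
    build: .
    command: ["./manage.py", "runserver", "0.0.0.0:80"]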
I have a Django AWS server that I need to keep running over the weekend so it can be graded. I typically start it from an SSH session using PuTTY with:
python manage.py runserver 0.0.0.0:8000
I was originally thinking of making a bash script to do the task of starting the server, monitoring it, and restarting it when needed using the steps below, but was told it wasn't going to work. Why?
1) Start the server using python manage.py runserver 0.0.0.0:8000 & to send it to the background
2) After sleeping for <some integer length 'x'> minutes, check whether the server is down using ss -tulw, grepping the result for the port the server should be running on.
3) Based on the result from step (2), either sleep for 'x' minutes again or restart the server (possibly fully stopping anything left running beforehand).
Originally, I thought it was a pretty decent idea, as we can't always be monitoring the server.
EDIT: I checked that ss -tulw | grep 8000 correctly finds the server while it is running.
If I understand you correctly, this is a non-production Django app. You could run it using Django's development server, like python manage.py runserver 0.0.0.0:8000, as you did.
Tools like monit (https://mmonit.com/monit/) or supervisord (http://supervisord.org/) are meant to do what you described - monitor a process and restart it if necessary - but you could also just use a cron job that runs, say, every minute (a sketch follows the list). In the cron job, you:
Check whether your process is still running and/or still listening on port 8000.
Abort if it is already running.
Restart it if it has stopped or is not listening on port 8000.
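A rough sketch of that cron approach (the paths, port, and log location are assumptions, not something you must use):

* * * * * /home/ubuntu/check_server.sh

where check_server.sh is something like:

#!/bin/sh
# Restart the dev server if nothing is listening on port 8000.
if ! ss -tulw | grep -q 8000; then
    cd /home/ubuntu/myproject || exit 1
    nohup python manage.py runserver 0.0.0.0:8000 >> /home/ubuntu/runserver.log 2>&1 &
fi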
I haven't yet been able to get Apache working with my Django app, so until I do, I'm using runserver on my Linux server to demo the app. The problem is that whenever I close the SSH connection to the server, runserver stops running. How can I keep runserver running when I, say, put my laptop to sleep or lose internet connectivity?
P.S. I'm aware that runserver isn't intended for production.
Using Screen
You can run runserver in a screen session. After you detach from that session, it will keep running. Log in via ssh and start a screen session with
screen
It will look like your usual terminal. Now run the server:
python manage.py runserver 8080
After this, you can detach from the session using Ctrl+a Ctrl+d. Now your app should be available even after you quit your ssh session.
If you want to stop the runserver, you can re-attach to your screen session. Get a list of existing sessions with
screen -ls
There is a screen on:
10989.pts-1.hostname (Detached)
1 Socket in /run/screens/S-username.
Then you can reattach with the command
screen -R 10989
Using nohup
Again, after logging into your server, start the runserver with
nohup python manage.py runserver 8080 &
All output the runserver writes (like debug info and so on) will be written to a file called nohup.out in the same folder.
To stop the server after using nohup, you need to note the process id (pid) shown, or find the pid afterwards with ps, top, or any other tool.
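For example (assuming this is the only runserver process on the machine):

ps aux | grep "[r]unserver"
kill <pid>

The [r] in the pattern keeps grep from matching its own process; <pid> is the process id from the ps output.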
Since runserver is only for development and isn't intended to be run in production, there is no built-in way to do this.
You will need to use a tool like tmux/screen or nohup to keep the process alive, even if the spawning terminal closes.
$ nohup python manage.py runserver &
nohup makes the command ignore the hangup signal, and & puts it in the background, disconnecting it from stdout.
If you are SSH'ing into your server and starting Django there, consider using a program on the server such as tmux. This will allow your server-side shell process to remain alive after disconnection, and you can reattach on your next login with a simple
tmux attach
command.
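For example (the session name is arbitrary):

tmux new -s django
python manage.py runserver 0.0.0.0:8000

Detach with Ctrl+b d, and later reattach with tmux attach -t django.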
I'm using celery 3.0.11 and djcelery 3.0.11 with python 2.7 and django 1.3.4.
I'm trying to run celeryd as a daemon and have followed the instructions from http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html
When I run the workers using celeryd as described in the link with a python (non-django) configuration, the daemon comes up.
When I run the workers using python manage.py celery worker --loglevel=info to test the workers, they come up fine and start to consume messages.
But when I run celeryd with a Django configuration, i.e. using manage.py celeryd_multi, I just get a message that says
> Starting nodes...
> <node_name>.<user_name>: OK
But I don't see any daemon running, and my messages obviously don't get consumed. There is an empty log file (the one configured in the celeryd config file).
I've tried this with a very basic django project as well and I get the same result.
I'm wondering if I'm missing some basic configuration piece. Since I don't get any errors and have no logs, I'm stuck. Running it with sh -x doesn't show anything special either.
Has anyone experienced this before or does anyone have any suggestions on what I can try?
Thanks,
For now I've switched to using supervisord instead of celeryd, and I have no issues running multiple workers.
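A minimal supervisord program section for a worker looks something like this (the paths and program name are placeholders for your own setup):

[program:celery-worker]
command=/path/to/virtualenv/bin/python manage.py celery worker --loglevel=info
directory=/path/to/project
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/celery-worker.log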
I am having difficulty running Django on my Ubuntu server. I am able to run Django but I don't know how to run it as a service.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Here is what I am doing:
I log onto my Ubuntu server
Start my Django process: sudo ./manage.py runserver 0.0.0.0:80 &
Test: Traffic passes and the app displays the right page.
Now I close my terminal window and it all stops. I think I need to run it as a service somehow, but I can't figure out how to do that.
How do I keep my Django process running on port 80 even when I'm not logged in?
Also, I get that I should be linking it through Apache, but I'm not ready for that yet.
Don't use manage.py runserver to run your server on port 80 - not even for development. If you need that for your development environment, it's still better to redirect traffic from port 8000 to port 80 through iptables than to run your Django application as root.
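For example, a typical redirect rule looks like this (run as root; adjust the ports to your setup):

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000

Then runserver can listen on 8000 as an unprivileged user while the app still answers on port 80.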
In the Django documentation (or in other answers to this post) you can find out how to run it with a real webserver.
If, for any other reason, you need a process to keep running in the background after you close your terminal, you can't just run it with &: it will run in the background but keep your session's session id, and it will be closed when the session leader (your terminal) is terminated.
You can circumvent this behaviour by running the process through the setsid utility. See the setsid man page for more details.
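For example:

setsid python manage.py runserver 0.0.0.0:8000 > server.log 2>&1 < /dev/null &

The process gets its own session id, so it survives when your terminal's session ends.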
Anyway, if after reading the other comments you still want to use manage.py, just add "nohup" before your command line:
sudo nohup /home/ubuntu/django_projects/myproject/manage.py runserver 0.0.0.0:80 &
For this kind of job, since you're on Ubuntu, you should use the awesome Ubuntu Upstart.
Just create a job file, e.g. django-fcgi.conf in case you're going to deploy Django with FastCGI:
/etc/init/django-fcgi.conf
and put the required upstart syntax instructions in it.
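A minimal job file might look like this (the paths are placeholders; note that the job name comes from the file name, so /etc/init/runserver.conf would give you a job called runserver):

description "Django dev server"
start on runlevel [2345]
stop on runlevel [016]
respawn
script
    cd /home/ubuntu/django_projects/myproject
    exec ./manage.py runserver 0.0.0.0:80
end script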
Then you would be able to start and stop your runserver command simply with:
start runserver
and
stop runserver
Examples of managing the deployment of Django processes with Upstart: here and here. I found those two links helpful when setting up this deployment structure myself.
The problem is that & runs a program in the background but does not separate it from the spawning process. However, an additional issue is that you are running the development server, which is only for testing purposes and should not be used for a production environment.
Use gunicorn or Apache with mod_wsgi. The documentation for Django and for those projects should make it explicit how to serve it properly.
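For example, with gunicorn (the project name is a placeholder, and this assumes your project has the standard wsgi.py module):

gunicorn myproject.wsgi:application --bind 0.0.0.0:8000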
If you just want a really quick-and-dirty way to run your Django dev server on port 80 and leave it there - which is not something I recommend - you could potentially run it in a screen session. screen will create a terminal that will not close even if you close your connection. You can even run the server in the foreground of a screen terminal and disconnect, leaving it to run until reboot.
If you are using virtualenv, the sudo command will execute manage.py runserver outside of the virtualenv context, and you'll get all kinds of errors.
To fix that, I did the following:
While working in the virtual env, type:
which python
outputs: /home/oleg/.virtualenvs/openmuni/bin/python
then type:
sudo !!
outputs: /usr/bin/python
Then all that's left to do is create a symbolic link so that the global python path points to the python of the virtualenv you currently use and would like to run on 0.0.0.0:80.
First, move the global python binary to a backup location:
mv /usr/bin/python /usr/bin/python.old
Then link /usr/bin/python to the virtualenv's python (note the order: ln -s <target> <link name>):
ln -s /home/oleg/.virtualenvs/openmuni/bin/python /usr/bin/python
That's it! Now you can run sudo python manage.py runserver 0.0.0.0:80 in the virtualenv context!
Keep in mind that if you are using a Postgres DB in your local development setup, you'll probably need a root role.
Credit to @ydaniv.