I use gunicorn to deploy my Django project, and I want it to run in the background.
python2.7 manage.py run_gunicorn 0.0.0.0:8090
It does not run in the background.
gunicorn_django -b 0.0.0.0:8090
It doesn't see my apps.
The project runs fine with python manage.py runserver.
To run gunicorn in the background you will need a process control system such as Supervisord to manage it.
Deployment instructions with Supervisor and/or Runit are described here
For the part of the problem where the apps are not detected: did you add gunicorn to your INSTALLED_APPS setting in Django's settings.py? If not, the integration is described here.
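For example, a minimal sketch of that settings change (only the relevant part of settings.py is shown):
INSTALLED_APPS = (
    # ... your existing apps ...
    'gunicorn',
)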
Edit:
Sample gunicorn management script for Supervisor:
#!/bin/bash
set -e
LOGFILE=/home/ubuntu/logs/gunicorn/x.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=3
HOST=0.0.0.0:8000
# user/group to run as
USER=ubuntu
GROUP=ubuntu
# project directory and virtualenv
cd ~/webapps/Z/
. ~/virtualenvs/production/bin/activate
# make sure the log directory exists
test -d $LOGDIR || mkdir -p $LOGDIR
exec ~/virtualenvs/production/bin/gunicorn_django -b $HOST -w $NUM_WORKERS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
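A minimal supervisord program entry that runs a script like the one above might look like this (the script path and program name are assumptions):
[program:gunicorn]
command=/home/ubuntu/bin/gunicorn_start.sh
user=ubuntu
autostart=true
autorestart=true
redirect_stderr=true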
I want to test the performance of a web service running inside an AWS ECS service, depending on the number of gunicorn workers.
The entrypoint of the container is:
WORKERS=15
THREADS=15
gunicorn \
--reload \
--workers "${WORKERS}" \
--threads "${THREADS}" \
--max-requests 10000 \
--max-requests-jitter 200 \
--timeout 60 \
--access-logfile - \
--error-logfile - \
--bind 0.0.0.0:8000 \
config.wsgi:application
The problem:
If I want to change the number of workers/threads I have to stop gunicorn → update the ECS task definition (set a new number of WORKERS and THREADS) → restart the ECS container. That takes too much time if I want to test tens of configurations.
Possible workaround:
It is possible to set a mock never-ending entrypoint like watch -n 1000 "ls -l", log in to the ECS container with bash, and run gunicorn with the desired parameters manually. But that is a little inconvenient and requires setting up a test-specific environment, so I want to avoid this method.
The question:
Is it possible to change the number of workers and threads of an already running gunicorn instance, so that I can test different configurations without rerunning the container or stopping its entrypoint process?
You could use a config file and reload the configuration by sending a HUP signal to the gunicorn master process.
See: reload configuration
Here is a simple example:
# Dockerfile
FROM python:3.9-slim
COPY . /app
RUN pip install --no-cache-dir -r /app/requirements.txt
WORKDIR /app
ENTRYPOINT ["gunicorn", "app:app"]
# tree ./
.
├── app.py
├── Dockerfile
├── gunicorn.conf.py
└── requirements.txt
# cat gunicorn.conf.py
workers = 2
threads = 2
loglevel = 'debug'
errorlog = '-'
accesslog = '-'
pidfile = '/var/run/app.pid'
# reload the configuration by
kill -HUP $(cat /var/run/app.pid)
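With that in place you can change the worker/thread counts without restarting the container: open a shell in the running container (for example via ECS Exec), edit gunicorn.conf.py, and signal the master process. A rough sketch (the sed edit is just one way to change the value; the paths match the example above):
# inside the running container
sed -i 's/^workers = .*/workers = 4/' /app/gunicorn.conf.py
kill -HUP "$(cat /var/run/app.pid)"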
I have a Django application running in Heroku. On the initial deployment, I manually migrated the database schema using heroku run.
The next time I needed to push migrations to the app, the release went off without a complaint.
However, when I went to the page to see it live, I got a ProgrammingError: the new column didn't exist. The migrations had never been run.
Here's my Procfile:
web: gunicorn APP_NAME.wsgi --log-file -
release: python manage.py migrate
release: python manage.py collectstatic --noinput
worker: celery worker -A APP_NAME -B -E -l info
The collectstatic release runs successfully, but the migrate release is seemingly ignored or overlooked. When I migrated manually, the migrations ran without error. There is an empty __init__.py file in the migrations folder.
If anyone knows what could possibly be hindering the migrate release from running, that would be awesome.
Okay, so I've figured it out. Although in its documentation Heroku seems to imply that there can be more than one release tag in a Procfile, this is untrue.
The last release tag in the Procfile takes precedence.
This means that in order to run multiple commands in the release stage, you have to use a shell script.
Now, my Procfile looks like this:
web: gunicorn APP_NAME.wsgi --log-file -
release: ./release.sh
worker: celery worker -A APP_NAME -B -E -l info
And I have a release.sh script that looks like this:
#!/bin/bash
python manage.py migrate
python manage.py collectstatic --no-input
Make sure to make your release.sh script executable:
Running chmod u+x release.sh in terminal prior to committing should do the trick.
As I cannot comment on rchurch4's answer, here it is: if you just have a few commands to run at release time, you can use the following in your Procfile:
release: command1 && command2 && command3 [etc.]
for instance
release: python manage.py migrate && python manage.py loaddata foo && python manage.py your_custom_management_command
I am running a virtualenv with Python 3.6 on Ubuntu 16.04 for my Django project, using uWSGI and NGINX.
I have uWSGI installed globally and also in the virtualenv.
I can run my project from the command line using uWSGI within the env with
/home/user/Env/myproject/bin/uwsgi --http :8080 --home /home/user/Env/myproject --chdir /home/user/myproject/src/myproject -w myproject.wsgi
and go to my domain and it loads fine.
However, I am of course running uWSGI in "Emperor mode", and when I set the service file up (along with NGINX) the domain displays an internal server error.
The uWSGI logs show --- no python application found ---.
I was having this problem when running
uwsgi --http :8080 --home /home/user/Env/myproject --chdir /home/user/myproject/src/myproject -w myproject.wsgi
because it was using the globally installed uwsgi instead of the virtualenv one.
I changed my ExecStart to the virtualenv uwsgi path, but no luck.
I can't figure out what I'm doing wrong. A path error? A syntax error?
My /etc/systemd/system/uwsgi.service file:
[Unit]
Description=uWSGI Emperor service
[Service]
ExecStartPre=/bin/bash -c 'mkdir -p /run/uwsgi; chown user:www-data /run/uwsgi'
ExecStart=/home/user/Env/myproject/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
Okay, a bit silly, but it seems I just ran sudo systemctl stop uwsgi and then sudo systemctl start uwsgi, and it works now.
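For reference, a vassal ini in /etc/uwsgi/sites that mirrors the working command line above might look like this (the file name and socket settings are assumptions; the other paths come from the question):
# /etc/uwsgi/sites/myproject.ini
[uwsgi]
home = /home/user/Env/myproject
chdir = /home/user/myproject/src/myproject
module = myproject.wsgi
socket = /run/uwsgi/myproject.sock
chown-socket = user:www-data
chmod-socket = 660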
I use Django-Q as a task queue and scheduler, and I need to keep the command python manage.py qcluster running.
How can I do that with systemd?
I've found this code for a .service file, but I don't know how to use my virtualenv for the Python path:
[Unit]
Description=Async tasks runner
After=network.target remote-fs.target
[Service]
ExecStart=/usr/bin/django-admin qcluster --pythonpath /path/to/project --settings settings
User=apache
Restart=always
[Install]
WantedBy=multi-user.target
Use the django-admin binary installed in your virtualenv's bin directory, or use the python binary there to run manage.py from within your project's working directory:
ExecStart=/path/to/my-venv/bin/django-admin qcluster --pythonpath /path/to/project --settings settings
or
ExecStart=/path/to/my-venv/bin/python manage.py qcluster --pythonpath /path/to/project --settings settings
WorkingDirectory=/path/to/project
For those who still have problems with this, just follow these steps:
Create a service, for example:
sudo nano /etc/systemd/system/qcluster.service
Edit the service as follows:
[Unit]
Description=qcluster runner
After=network.target
[Service]
User=user
WorkingDirectory=/home/user/path_to_project
ExecStart=/home/user/path_to_project_env/bin/python manage.py qcluster
[Install]
WantedBy=multi-user.target
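Reload systemd so it picks up the new unit file:
sudo systemctl daemon-reload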
Enable the service:
sudo systemctl enable qcluster.service
Start the service:
sudo systemctl start qcluster.service
celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
celery -A app worker -Q default -B -l debug --purge -n default_worker
celery -A app beat -l info
As of now we are running the three commands in screen sessions. What is a more production-ready way of running these commands?
The easiest way to create daemons is with supervisord. Sentry, which also uses Django and Celery, recommends using supervisord to run the workers; you can adjust the configuration to suit your setup:
[program:celery-priority-high]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog
stderr_logfile=syslog
You can, of course, also run Django itself using this method.
If supervisord is too much bloat for your needs, you can also create init scripts for your init system of choice (e.g. systemd).
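For completeness, the other two commands from the question can get their own program entries, plus an optional group so they can be managed together; a sketch following the same pattern as above:
[program:celery-default]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q default -B -l debug --purge -n default_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog

[program:celery-beat]
directory=/www/my_app/
command=/path/to/celery -A app beat -l info
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog

[group:celery]
programs=celery-priority-high,celery-default,celery-beat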