Set up pipenv with supervisor - python

I want to deploy a dev server, but I have a problem starting celery and gunicorn. I'm using the following scripts:
celery.sh
#!/bin/bash
cd /home/dev/app
pipenv run celery -A config worker -B -l info
and start.sh for gunicorn
#!/bin/bash
cd /home/dev/app
pipenv run gunicorn config.wsgi:application -b 127.0.0.1:8005 -w 2 -t 60 \
--env DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE \
--env DSN=$SENTRY_DSN \
--env DATABASE_URL=$DATABASE_URL \
--log-file - \
--error-logfile /home/dev/app/errors.log
Also, here is my supervisor config:
[program:back]
directory=/home/dev/app/
command=/home/dev/bin/start
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
[program:celery]
directory=/home/dev/app/
command=/home/dev/bin/celery
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
When I run sudo supervisorctl start celery I get the following error:
/home/dev/bin/celery: line 3: pipenv: command not found
I also added the following lines, as the pipenv documentation recommends (https://pipenv.readthedocs.io/en/latest/diagnose/):
[supervisord]
environment=LC_ALL='en_US.UTF-8',LANG='en_US.UTF-8'
UPDATE
I changed my supervisor config:
[program:back]
directory=/home/dev/app/
command=pipenv run gunicorn config.wsgi:application --bind 127.0.0.1:8005
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
[program:celery]
directory=/home/dev/app/
command=pipenv run celery -A config:celery_app worker -B -l info
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
And now I'm getting an error:
back: ERROR (no such file)

You need to give the explicit path to gunicorn. I'm not sure about pipenv, but the error you are getting occurs because supervisor tries to find the command itself rather than going through your shell's PATH. You should change your config file to something like this:
[program:back]
directory=/home/dev/app/
command=/path/to/pipenv run /path/to/gunicorn config.wsgi:application --bind 127.0.0.1:8005
Then you must restart supervisord so it loads the new settings:
sudo service supervisord reload
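For reference, a fuller sketch of the first program block with absolute paths; the pipenv location here is an assumption, so check yours with "which pipenv" (pipenv run will then resolve gunicorn from the project's virtualenv on its own):
[program:back]
directory=/home/dev/app/
; absolute path, since supervisor does not search your login shell's PATH
command=/usr/local/bin/pipenv run gunicorn config.wsgi:application --bind 127.0.0.1:8005
user=dev
autostart=true
autorestart=true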

In your config file, change command= to bash -c followed by the full path of the file to execute. This should do the trick.
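For example, with the celery.sh script from the question installed as /home/dev/bin/celery (an assumed location, matching the original config):
[program:celery]
; bash -c runs the script through a shell; pipenv must still be reachable,
; e.g. via an absolute path inside the script itself
command=/bin/bash -c "/home/dev/bin/celery"
directory=/home/dev/app/
user=dev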

Related

Using nice command on gunicorn service workers

I have a service:
[Unit]
Description=tweetsift
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/var/www/html
ExecStart=sudo /usr/bin/nice -n -20 sudo -u root sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
Restart=on-failure
[Install]
WantedBy=multi-user.target
When I run sudo systemctl status tweet I see that /usr/bin/nice is used for the main PID. However, it is not applied to the workers.
tweetsift.service - tweet
Loaded: loaded (/etc/systemd/system/tweet.service; enabled; preset: enabled)
Active: active (running) since Mon 2023-01-09 04:36:08 UTC; 5min ago
Main PID: 3124 (sudo)
Tasks: 12 (limit: 4661)
Memory: 702.8M
CPU: 7.580s
CGroup: /system.slice/tweet.service
├─3124 sudo /usr/bin/nice -n -20 sudo -u root sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3125 sudo -u root sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3126 sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3127 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3128 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3129 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3130 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
└─3131 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
I am running a machine learning script that eats a lot of CPU. Running nice python3 tweet.py directly works, and the process isn't killed.
However, when I call the API endpoint I built, the service starts up a worker that then gets killed for OOM (Out of Memory).
I am using Ubuntu 20.04 & Apache2.
Any ideas? I was able to get nice running on the main PID by updating /etc/sudoers and adding a line to allow sudo to use it.
But I still can't get nice applied to the workers' PIDs when they start up upon an API call to the Flask app I've got running.
I am using gunicorn (version 20.1.0).
Thanks!
I've tried everything at this point. I want nice to be applied to the gunicorn workers when my Flask app receives an API call, without them getting killed for OOM.
I'm using a 4 GB Premium Intel droplet with an 80 GB disk on DigitalOcean.
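For what it's worth, a sketch of one alternative (assuming systemd's Nice= directive is an option here): niceness is inherited across fork(), so setting it on the service applies to the gunicorn workers too, without any sudo or nice wrappers:
[Unit]
Description=tweetsift
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/var/www/html
# Nice= sets the niceness of the service's processes; forked gunicorn
# workers inherit it, so no sudo/nice wrappers are needed
Nice=-20
ExecStart=/usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
Restart=on-failure
[Install]
WantedBy=multi-user.target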

How can I configure celery to run on startup of nginx?

I have celery running locally just by running celery -A proj -l info (though I don't even know whether I should be using this command in production), and I want celery to start on my production web server whenever nginx starts. The init system is systemd.
Create a service file, celery.service, like this:
[Unit]
Description=celery service
After=network.target
[Service]
PIDFile=/run/celery/pid
User=celery
Group=celery
# RuntimeDirectory is relative to /run; this creates /run/celery for the pid file
RuntimeDirectory=celery
WorkingDirectory=/path/to/project
# systemd wants an absolute path here; --pidfile makes celery write the pid that PIDFile= expects
ExecStart=/path/to/celery -A proj worker -l info --pidfile=/run/celery/pid
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
Restart=on-abort
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Move the file to /etc/systemd/system/ and enable it; the next time you restart the server, celery will be started by systemd on boot.
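The setup commands would look something like this (assuming the file was saved as /etc/systemd/system/celery.service):
sudo systemctl daemon-reload
sudo systemctl enable celery.service
sudo systemctl start celery.service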

How do we run the celery commands below on a production server?

celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
celery -A app worker -Q default -B -l debug --purge -n default_worker
celery -A app beat -l info
As of now we are running the three commands in screen sessions. What is a more production-ready way of running these commands?
The easiest way to create daemons is with supervisord. Sentry, which also uses Django and Celery, recommends using supervisord to run the workers; you can adjust the configuration to suit your setup:
[program:celery-priority-high]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog
stderr_logfile=syslog
You can, of course, also run Django itself using this method.
If supervisord is too much bloat for your needs, you can also create init scripts for your init system of choice (e.g. systemd).
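Following the same pattern, the other two commands from the question would get their own program blocks (the celery path is a placeholder, as above):
[program:celery-default]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q default -B -l debug --purge -n default_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog
stderr_logfile=syslog
[program:celery-beat]
directory=/www/my_app/
; note: -B already embeds a beat scheduler in a worker, so with this
; dedicated beat process you may want to drop -B from the workers
command=/path/to/celery -A app beat -l info
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog
stderr_logfile=syslog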

Getting an error Unknown command: 'celery' with celery and supervisor

I am trying to set up celery with supervisord on an AWS instance. I have the project up and running in a virtual environment, and I tested celery with "python2.7 manage.py celery worker". But I am getting the error Unknown command: 'celery' when running it under supervisor.
The two supervisor configurations I have tried are below:
1)
[program:celery-eatio]
environment = PYTHONPATH="/home/ubuntu/eatio_env/lib/python2.7:/home/ubuntu/eatio-server:/home/ubuntu/eatio_env/lib/python2.7/site-packages:$PYTHONPATH",DJANGO_SETTINGS_MODULE="eatio_web.settings"
command=/home/ubuntu/eatio-server/manage.py celery worker
user=ubuntu
numprocs=1
stdout_logfile=/home/ubuntu/celery_error.log
stderr_logfile=/home/ubuntu/celery_out.log
autostart=true
autorestart=true
2)
[program:celery-eatio]
command=/home/ubuntu/eatio_env/bin/python2.7 /home/ubuntu/eatio-server/manage.py celery worker --loglevel=info -c 1 -E -B -Q celery_periodic -n periodic_worker
directory=/home/ubuntu/eatio-server
user=ubuntu
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/celery_error.log
stderr_logfile=/home/ubuntu/celery_out.log
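One plausible culprit, offered only as a guess: in the first config, manage.py runs under the system interpreter, which can't import django-celery, so the celery management command never gets registered. A sketch that pins the virtualenv's interpreter and the settings module:
[program:celery-eatio]
directory=/home/ubuntu/eatio-server
environment=DJANGO_SETTINGS_MODULE="eatio_web.settings"
; run manage.py under the virtualenv's python so django-celery is importable
command=/home/ubuntu/eatio_env/bin/python2.7 manage.py celery worker --loglevel=info
user=ubuntu
autostart=true
autorestart=true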

How to run a Django project in the background using gunicorn?

I use gunicorn to deploy a Django project in the background.
python2.7 manage.py run_gunicorn 0.0.0.0:8090
It does not run in the background.
gunicorn_django -b 0.0.0.0:8090
It doesn't see my apps.
The project runs successfully with python manage.py runserver.
To run gunicorn in the background you will need to use a process control system like Supervisord to manage gunicorn.
Deployment instructions with Supervisor and/or Runit are described here.
For the part of the problem where the apps are not detected: did you add gunicorn to your INSTALLED_APPS setting in Django's settings.py? If not, the integration is described here.
Edit:
Sample gunicorn management script for supervisor:
#!/bin/bash
set -e
LOGFILE=/home/ubuntu/logs/gunicorn/x.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=3
HOST=0.0.0.0:8000
# user/group to run as
USER=ubuntu
GROUP=ubuntu
cd ~/webapps/Z/
# activate the project's virtualenv
. ~/virtualenvs/production/bin/activate
# create the log directory if it doesn't exist yet
test -d $LOGDIR || mkdir -p $LOGDIR
# exec replaces the shell so the process manager supervises gunicorn directly
exec ~/virtualenvs/production/bin/gunicorn_django -b $HOST -w $NUM_WORKERS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
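The matching supervisor entry might look like this (the script location is an assumption; save it anywhere convenient and chmod +x it):
[program:django]
command=/home/ubuntu/bin/gunicorn_start
user=ubuntu
autostart=true
autorestart=true
redirect_stderr=true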
