How can I configure celery to run on startup of nginx? - python

I have celery running locally by just running celery -A proj -l info (although I don't even know if I should be using this command in production), and I want to get celery running on my production web server every time nginx starts. The init system is systemd.

Create a service file like this, for example celery.service:
[Unit]
Description=celery service
After=network.target
[Service]
PIDFile=/run/celery/pid
User=celery
Group=celery
RuntimeDirectory=celery
WorkingDirectory=/path/to/project
ExecStart=/path/to/venv/bin/celery -A proj worker -l info
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
Restart=on-abort
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Move the file to /etc/systemd/system/ and enable it; from then on, celery will be started by systemd on boot.
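A typical sequence, assuming the unit file is named celery.service and the paths have been adjusted to your setup:
sudo cp celery.service /etc/systemd/system/    # install the unit
sudo systemctl daemon-reload                   # make systemd read the new unit
sudo systemctl enable celery.service           # start automatically on boot
sudo systemctl start celery.service            # start it right now
systemctl status celery.service                # confirm the worker is running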

Related

How to use Systemd for Django-q daemon

I use Django-Q as a task queue and scheduler, and I need to keep the command python manage.py qcluster running.
How can I do that with systemd?
I've found this code for a .service file, but I don't know how to make it use my virtualenv's Python:
[Unit]
Description=Async tasks runner
After=network.target remote-fs.target
[Service]
ExecStart=/usr/bin/django-admin qcluster --pythonpath /path/to/project --settings settings
User=apache
Restart=always
[Install]
WantedBy=multi-user.target
Use the django-admin binary installed in your virtualenv's bin directory, or the python binary there to run manage.py within your project's working directory:
ExecStart=/path/to/my-venv/bin/django-admin qcluster --pythonpath /path/to/project --settings settings
or
ExecStart=/path/to/my-venv/bin/python manage.py qcluster --pythonpath /path/to/project --settings settings
WorkingDirectory=/path/to/project
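If it's unclear whether the virtualenv actually provides those binaries, a quick check from a shell (using the same placeholder paths as above) is:
ls /path/to/my-venv/bin/django-admin /path/to/my-venv/bin/python        # both should exist
cd /path/to/project && /path/to/my-venv/bin/python manage.py qcluster   # test run by hand; stop with Ctrl-C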
For those who still have problems with this, just follow these steps:
Create a service file, for example:
sudo nano /etc/systemd/system/qcluster.service
Edit the service file as follows:
[Unit]
Description=qcluster runner
After=network.target
[Service]
User=user
WorkingDirectory=/home/user/path_to_project
ExecStart=/home/user/path_to_project_env/bin/python manage.py qcluster
[Install]
WantedBy=multi-user.target
Enable the service:
sudo systemctl enable qcluster.service
Start the service:
sudo systemctl start qcluster.service
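To confirm the cluster actually came up, the usual checks apply:
systemctl status qcluster.service    # should report active (running)
journalctl -u qcluster.service -f    # follow the qcluster output live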

How to get celery to work with SCL python and systemd?

I have a Python application that uses celery for background tasks. I use the Python interpreter provided by SCL.
I was able to create a systemd unit file for the app: How to use user's pipenv via systemd? Python is installed via SCL
But I do not understand how to write a similar systemd unit file for celery.
I tried:
[Unit]
Description=app celery service
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
Type=forking
RestartSec=10
Restart=always
Environment="APP_SITE_SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site/app
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
# Main process
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run celery -A celery_service.celery worker
[Install]
WantedBy=multi-user.target
When I start the systemd unit, in the journal I see that the celery app starts. After a few seconds, the service fails.
Job for app_celery.service failed because a timeout was exceeded. See "systemctl status app_celery.service" and "journalctl -xe" for details.
Here's a journal entry:
Jul 17 07:43:31 some.host.com scl[5181]: worker: Cold shutdown (MainProcess)
I tried with Type=oneshot and Type=simple too. None of them worked. I suspect this has something to do with SCL.
Is there a way to get the celery app to work with SCL and systemd?
Celery has a command line option, --detach. When you use --detach, Celery starts the workers as a background process.
Here's the working systemd unit file:
[Unit]
Description=app celery service
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
Type=simple
RemainAfterExit=yes
RestartSec=10
Restart=always
Environment="SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site/app/
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
NotifyAccess=all
LimitMEMLOCK=infinity
LimitNOFILE=20480
LimitNPROC=8192
# Main process
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run celery -A celery_service.celery worker --detach
[Install]
WantedBy=multi-user.target
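To apply the changes and verify the worker, something along these lines should do (the unit name app_celery.service is taken from the error message earlier; adjust it to whatever your file is called):
sudo systemctl daemon-reload                 # re-read unit files after editing
sudo systemctl restart app_celery.service    # (re)start the detached worker
systemctl status app_celery.service          # should now report the service as active
journalctl -u app_celery.service -n 50       # recent celery worker output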

Systemd ExecStart

I have a Python script which I want to run as a daemon. I am doing that by creating a unit file under /etc/systemd/system/,
and I want to run it as systemctl start/stop/restart myservice,
and depending on the start/stop argument I handle system signals like SIGHUP and SIGINT.
The problem is that I can run my script as python main.py start/stop/restart and my logic works,
but after converting it into a unit file the Python file is invoked by ExecStart, and I don't know how to pass arguments there.
[Unit]
Description=This service monitors docker daemon for events
After=multi-user.target
[Service]
Type=simple
ExecStart=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/main.py
User=root
WorkingDirectory=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/
Restart=on-failure
[Install]
WantedBy=multi-user.target
Aren't you actually running "main.py start" and "main.py stop"? In that case you have effectively written a "forking" service. Note that systemd does not allow a variable as the program to execute, so the script path has to be written out in ExecStart and ExecStop:
[Unit]
Description=This service monitors docker daemon for events
After=multi-user.target
[Service]
Type=forking
ExecStart=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/main.py start
ExecStop=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/main.py stop
User=root
WorkingDirectory=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/
Restart=on-failure
[Install]
WantedBy=multi-user.target
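Assuming the unit file is saved as myservice.service (the name used with systemctl in the question), applying and testing it would look roughly like this:
sudo systemctl daemon-reload         # pick up the new or edited unit
sudo systemctl start myservice       # runs "main.py start" via ExecStart
systemctl status myservice           # systemd should have found the daemonized process
sudo systemctl stop myservice        # runs "main.py stop" via ExecStop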

Reload systemd/gunicorn/flask worker threads sequentially for config changes

I need to force an app reload to pick up config changes. I'm using systemd to kick off gunicorn, which runs a Flask app.
I pick up the config changes from a /var/run/xx.conf file, which is watched by a systemd path unit, app.path:
[Path]
PathChanged=/var/run/app.conf
[Unit]
Description=app-restart
and a corresponding service unit, triggered by app.path:
[Unit]
Description=app-restart
After=network.target
[Service]
Type=oneshot
PIDFile=/run/app-restart/pid
User=root
Group=root
ExecStart=/usr/bin/app-reload.py
PrivateTmp=false
EnvironmentFile=-/etc/environment
[Install]
WantedBy=multi-user.target
The question is - how to gracefully terminate each gunicorn flask worker thread?
The app's systemd service uses:
[Unit]
Description=app gunicorn daemon
After=network.target
[Service]
PIDFile=/run/app/pid
User=ubuntu
Group=www-data
WorkingDirectory=/opt/app
ExecStart=/usr/local/bin/gunicorn --bind unix:/var/tmp/app.sock -m 007 --workers=2 -t 400 --backlog 2048 --log-config=/etc/app/log.cfg --log-level=DEBUG app
Restart=always
RestartSec=15
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=false
EnvironmentFile=-/etc/environment
[Install]
WantedBy=multi-user.target
so that I can kill off gunicorn worker threads and they will be restarted.
I would like to avoid killing off threads that are in the process of handling responses.
Ideally, after each response is handled, the thread would check whether it needs to quit or not. I know I can do this in each and every flask API method, but is there a better way of transitioning all worker threads to the new config one at a time?
I don't want to systemctl restart app.service, as that kills off all threads and creates a dead time where there are no workers active.
I want each thread to terminate and reload independently so there are always some live workers.
Some options I've considered:
get a list of app pids with app-reload.py and kill them off one at a time. Gunicorn will restart each as it dies. This may terminate an in-progress request.
after each event is handled, check for the existence of a file created by app-reload.py and, if present, terminate.
I'm assuming there must be a way to migrate workers from one config to the other without creating a service dead spot.
RTM wins again.
http://docs.gunicorn.org/en/stable/signals.html
HUP: Reload the configuration, start the new worker processes with a
new configuration and gracefully shutdown older workers. If the
application is not preloaded (using the preload_app option), Gunicorn
will also load the new version of it.
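Before wiring this into systemd, the graceful reload can be sanity-checked by hand, assuming the gunicorn master actually writes its PID to /run/app/pid as the unit above expects:
sudo kill -s HUP "$(cat /run/app/pid)"    # ask the master to cycle its workers gracefully
ps --ppid "$(cat /run/app/pid)"           # worker PIDs change as old workers finish and exit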
So the end result is just this in the service unit triggered by the systemd app.path:
[Unit]
Description=app-restart
After=network.target
[Service]
Type=oneshot
User=root
Group=root
ExecStart=/bin/sh -c '/bin/kill -s HUP $(cat /run/app/pid)'
PrivateTmp=false
Alternatively, one could rely on the fact that the gunicorn app.service already contains an ExecReload= line and use:
ExecStart=/bin/systemctl reload app
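Either way, the path unit itself has to be enabled and started for the watch to take effect; a plausible sequence, reusing the unit names from the question, is:
sudo systemctl daemon-reload
sudo systemctl enable --now app.path     # start watching /var/run/app.conf for changes
sudo systemctl reload app                # or trigger the graceful reload manually at any time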

How to debug a Python script that segfaults when run as a systemd service?

This is driving me nuts.
A Flask app works fine if I personally run uWSGI from the CLI:
uwsgi --emperor /etc/uwsgi/emperor.ini
but when trying to start it as a service with systemd, there is a segfault and the resulting coredump says almost nothing:
sudo systemctl start emperor.uwsgi
coredump:
[New LWP 7639]
Core was generated by `/usr/bin/uwsgi --ini website.ini'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000123c in ?? ()
So, I have no idea how to get more detailed information. This is not a script I'm running with python app.py, but a script being served by uWSGI.
I'm clueless and would appreciate any advice.
Thanks.
Edit I - systemd unit file:
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/emperor.ini
ExecReload=/bin/kill -HUP $MAINPID
ExecStop=/bin/kill -INT $MAINPID
Restart=always
Type=notify
StandardError=syslog
NotifyAccess=all
KillSignal=SIGQUIT
[Install]
WantedBy=multi-user.target
If I run /usr/bin/uwsgi --ini /etc/uwsgi/emperor.ini manually, it works fine.
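One way to squeeze more detail out of that core dump, assuming systemd-coredump is capturing crashes on this machine and debug symbols for uWSGI/Python are available, is roughly:
coredumpctl list /usr/bin/uwsgi      # locate the most recent uwsgi crash
coredumpctl info /usr/bin/uwsgi      # signal, metadata, and a short backtrace
coredumpctl gdb /usr/bin/uwsgi       # open the core in gdb, then:
#   (gdb) bt full                    # full backtrace of the crashing thread
#   (gdb) thread apply all bt        # backtraces for every thread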
