We are using celery, rabbitmq and ffmpeg-python to read video streams. In a celery task (shared task), we call ffmpeg-python, which internally uses subprocess to run ffmpeg. Whenever we revoke tasks in celery, the ffmpeg processes become defunct/zombie. Over time they accumulate and exhaust our PIDs. Is there any way to gracefully exit the celery task along with its subprocess?
Does this SO answer help you?
Quote:
from celery import Celery
celery = Celery('vwadaptor', broker='redis://workerdb:6379/0', backend='redis://workerdb:6379/0')
celery.control.broadcast('shutdown', destination=[<celery_worker_name>])
[EDIT]
Alternatively, here is a Python module that provides warm and cold shutdown behaviour for Celery. Disclaimer: I haven't used it.
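For the original ffmpeg zombie problem, here is a minimal sketch of cleaning up the child inside the task itself. It assumes the prefork pool (so the task runs in the main thread of the pool process and can install a signal handler), that the stream is started with ffmpeg-python's run_async() (which returns a plain subprocess.Popen), and that the task is revoked with terminate=True so the pool process receives SIGTERM; the task name and ffmpeg graph are made up.

import signal

import ffmpeg
from celery import shared_task


@shared_task
def read_stream(url):
    # run_async() hands back a subprocess.Popen for the ffmpeg child
    process = ffmpeg.input(url).output('pipe:', format='null').run_async()

    def handle_term(signum, frame):
        # revoke(..., terminate=True) delivers SIGTERM to this pool process
        process.terminate()
        raise SystemExit(signum)

    signal.signal(signal.SIGTERM, handle_term)
    try:
        process.wait()
    finally:
        # always reap the child so it cannot linger as a zombie
        if process.poll() is None:
            process.kill()
        process.wait()

Revoking with app.control.revoke(task_id, terminate=True) (where app is your Celery instance) should then terminate and reap ffmpeg instead of leaving a defunct process behind.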
Related
I have celery running on a few computers and use flower for monitoring.
The computers are used by different people.
celery beat generates jobs for all the workers from one of the computers.
Every time a newly coded task is ready, all the workers except the beat computer raise a "task not registered" exception.
What is the recommended way to sync the code to all the other computers on the network? Is there a pre-hook kind of mechanism in celery to check for new code?
Unfortunately, you need to update the code on all the workers (nodes) and after that you need to restart all of them. This is by (good) design.
A clever systemd service could, in theory, be able to:
- send the graceful shutdown signal,
- run pip install -U your-project,
- start the Celery service.
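For example, a hypothetical systemd unit fragment (interpreter paths, project and node names are placeholders), where a restart stops the workers gracefully, updates the code, and starts them again:

[Service]
Type=forking
# update the code before the workers come back up
ExecStartPre=/opt/venv/bin/pip install -U your-project
ExecStart=/opt/venv/bin/celery -A your_project multi start w1 --pidfile=/run/celery/w1.pid
# warm shutdown: send TERM and wait for running tasks to finish
ExecStop=/opt/venv/bin/celery -A your_project multi stopwait w1 --pidfile=/run/celery/w1.pid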
This post is a continuation of my previous post - celery how to implement single queue with multiple workers executing in parallel?
I set up celery to work with eventlet using this command:
celery -A project worker -P eventlet -l info --concurrency=4
I can see that my tasks are moved to the active list faster (in flower), but I am not sure whether they are executing in parallel. I have a 4-core server for production, but I am not utilizing all the cores at the same time.
My question is:
How can I use all 4 cores to execute tasks in parallel?
Both the eventlet and gevent worker types provide a great solution for concurrency, at the cost of limiting parallelism to 1. To get true parallel task execution and utilise all cores, run several Celery instances on the same machine.
I know this runs counter to what popular Linux distros have in mind, so just ignore the system packages and roll your own configuration from scratch. A systemd service template is your friend.
Another option is to run Celery with the prefork pool: you get parallelism at the cost of limiting concurrency to the number of worker processes.
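For example (a sketch; the node count and concurrency values are arbitrary), you could start one eventlet worker instance per core:
celery multi start 4 -A project -P eventlet -c 250 -l info
or use the prefork pool with one process per core:
celery -A project worker -P prefork -l info --concurrency=4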
I am using celery in an uncommon way - I create a custom process when celery is started, and this process should keep running for as long as celery is running.
Celery workers use this process for their tasks (details not needed).
I run celery from command line and everything is ok:
celery -A celery_jobs.tasks.app worker -B --loglevel=warning
But when I use celeryd to daemonize celery, there is no way to stop it.
The command celeryd stop tries to stop celery but never finishes.
When I check the process trees in both cases, there is a difference - when running from the command line, the parent is obviously the celery process (the main process which has the celery workers as children). Killing (stopping) the parent celery process will stop all the celery workers and my custom process.
But when running with celeryd, my custom process has /sbin/init as its parent - and calling celeryd stop does not work - it seems like the main celery process is waiting for something, or is unable to stop my custom process since it is not a child process of celery.
I don't know much about processes and it is not easy to find information because I don't know what to search for, so any tips are appreciated.
I have had the same problem. I needed a quick solution, so I wrote this bash script:
#!/bin/bash
# ask the init script for a graceful stop first
/etc/init.d/celeryd stop
sleep 10
# force-kill anything that still has "celery" in its command line
export PIDS=`ps -ef | grep celery | grep -v 'grep' | awk '{print $2}'`
for PID in $PIDS; do kill -9 $PID; done;
If a process doesn't stop after 10 seconds, it's a long-time-to-stop candidate, so I decided to stop it abruptly.
I assume your custom process is not a child of any of your pool worker processes and need not be so.
I use supervisord instead of celeryd to daemonize the workers. It can be used to daemonize other processes as well, such as your custom processes.
In your case, your supervisord.conf can have multiple sections: one for each celery worker node and one (or more) for your custom process(es).
When you kill the supervisord process (with -TERM), it will take care of terminating all the workers and your custom process as well. If you use -TERM, you will need to make sure your custom processes handle it.
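For illustration, a minimal supervisord.conf sketch along those lines (the program names, the custom process command, and the timeout are placeholders; the worker command is taken from the question):

[program:celery_worker]
command=celery -A celery_jobs.tasks.app worker -B --loglevel=warning
autostart=true
autorestart=true
; stop/kill the whole process group so children are not orphaned
stopasgroup=true
killasgroup=true
stopwaitsecs=60

[program:custom_process]
command=/path/to/custom_process
autostart=true
autorestart=true

A supervisorctl stop all (or a TERM sent to supervisord) then shuts down both programs together.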
We're using celery eta tasks to schedule tasks FAR (like months) in the future.
We're now using the RabbitMQ backend because the mongo backend lost such tasks on a worker restart.
Tasks with the RabbitMQ backend actually seem to be persistent across celery and RabbitMQ restarts, BUT revoke messages seem to be lost on RabbitMQ restarts.
I guess that if revoke messages are lost, those eta tasks that should be killed will execute anyway.
This may be helpful from the documentation (Persistent Revokes):
The list of revoked tasks is in-memory so if all workers restart the list of revoked ids will also vanish. If you want to preserve this list between restarts you need to specify a file for these to be stored in by using the --statedb argument to celery worker:
$ celery -A proj worker -l info --statedb=/var/run/celery/worker.state
I have written an Upstart job to run celery on my Ubuntu server. Here's my configuration file, called celeryd.conf:
# celeryd - runs the celery daemon
#
# This task is run on startup to run the celery daemon
description "run celery daemon"
start on startup
expect fork
respawn
exec su - trakklr -c "/app/trakklr/src/trakklr celeryd --events --beat --loglevel=debug --settings=production"
When I execute sudo service celeryd start, the celeryd process starts just fine and all x worker processes start fine.
But when I execute sudo service celeryd stop, it stops most of the processes, but a few are left hanging.
Why is this happening? I'm using Celery 2.5.3.
Here's an issue from the GitHub tracker.
https://github.com/celery/django-celery/issues/142
I still use init.d to run celery so this may not apply. With that in mind, stopping the celery service sends the TERM signal to celery. This tells the workers not to accept new tasks but it does not terminate existing tasks. Therefore, depending on how long your tasks take to execute you may see tasks for some time after telling celery to stop. Eventually, they will all shut down unless you have some other problem.
I wasn't able to figure this out but it seemed to be an issue with my older celery version. I found this issue mentioned on their issue-tracker and I guess it points to the same issue:
https://github.com/celery/django-celery/issues/142
I upgraded my celery and django-celery to the 3.x.x versions and this issue was gone.