kombu producer and celery consumer - python

Is it possible for a kombu producer to queue a message on RabbitMQ to be processed by celery workers? It seems the celery workers do not understand messages published by the kombu producer.

I understand that to communicate with RabbitMQ you need a library that implements the AMQP specification.
Kombu is one such library: it can bind to a RabbitMQ exchange, then listen for and process messages by spawning any number of consumers.
Celery is an asynchronous task queue built on top of Kombu, with numerous add-ons such as in-memory processing, the ability to write to a DB/Redis cache, support for complex operations, and so on.
That said, you can use Kombu to read and write messages to/from RabbitMQ and have Celery workers process them, as long as the messages follow the task message protocol the workers expect.
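As a concrete illustration, here is a minimal sketch of publishing a Celery-compatible message with plain Kombu, assuming a worker listening on the default celery queue with a registered task named tasks.add (both names are hypothetical). It follows Celery's task protocol v2, which is why a plain-payload Kombu message is not understood:

import uuid
from kombu import Connection

task_id = str(uuid.uuid4())

with Connection('amqp://guest:guest@localhost//') as conn:
    producer = conn.Producer(serializer='json')
    producer.publish(
        # Protocol v2 body: (args, kwargs, embed)
        ([2, 3], {}, {'callbacks': None, 'errbacks': None,
                      'chain': None, 'chord': None}),
        exchange='',           # default exchange: routes by queue name
        routing_key='celery',  # Celery's default queue
        headers={
            'lang': 'py',
            'task': 'tasks.add',  # must match a task registered on the worker
            'id': task_id,
            'root_id': task_id,
            'parent_id': None,
            'group': None,
        },
        correlation_id=task_id,
        retry=True,
    )

In practice it is usually easier to let Celery build the message for you with app.send_task('tasks.add', args=[2, 3]), which speaks the same protocol.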

Related

Celery revoke leaving zombie ffmpeg process

We are using celery, rabbitmq and ffmpeg-python to read video streams. In a celery task (shared_task), we call ffmpeg-python, which internally uses subprocess to run ffmpeg. Whenever we revoke tasks in celery, the ffmpeg processes become defunct/zombie. Over time they accumulate and exhaust our PIDs. Is there any way to gracefully exit the celery task along with its subprocess?
Does this SO answer help you?
Quote:
from celery import Celery

celery = Celery('vwadaptor', broker='redis://workerdb:6379/0', backend='redis://workerdb:6379/0')

celery.control.broadcast('shutdown', destination=[<celery_worker_name>])
[EDIT]
Alternatively, here is a python module that provides warm and cold shutdown behaviour on Celery. Disclaimer: I haven't used it.
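Another angle on the zombie problem, assuming the defunct processes come from never wait()ing on the ffmpeg child: keep hold of the Popen handle that ffmpeg-python's run_async() returns and reap or terminate it in a finally block. A sketch (read_stream and the ffmpeg arguments are hypothetical):

import subprocess
import ffmpeg  # ffmpeg-python

def read_stream(url):
    # run_async() returns a subprocess.Popen, so we control the child's lifetime.
    proc = ffmpeg.input(url).output('pipe:', format='null').run_async()
    try:
        proc.wait()              # normal path: reap the child when it exits
    finally:
        if proc.poll() is None:  # still running, e.g. the task raised or was revoked
            proc.terminate()     # send SIGTERM so ffmpeg can exit cleanly
            try:
                proc.wait(timeout=10)
            except subprocess.TimeoutExpired:
                proc.kill()      # escalate if ffmpeg ignores SIGTERM
                proc.wait()      # final reap so no zombie is left behind

Note that revoke(task_id, terminate=True) signals the worker child process directly, so the finally block only helps when the task exits through Python; for a hard terminate you may also need a SIGTERM handler that stops the subprocess first.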

Celery & RabbitMQ configuration

I use Celery and RabbitMQ for my project.
I have 3 servers (Main, A, B). A and B calculate the tasks from the Main server, then post the responses back to it.
This is an organizational question: where do I need to install Celery and RabbitMQ?
As I understand it, RabbitMQ must be installed on the Main server (create a rabbitmq user, etc.) and Celery on servers A and B. Or do A and B also need RabbitMQ installed?
Thanks!
There is no need to install RabbitMQ on all servers; installing it on one server is sufficient. You just need to route tasks to the A and B servers (see the sketch after the list below).
Also, remember that AMQP is a network protocol: the producers, consumers, and broker can all reside on the same machine or on different ones. The roles are as follows.
Producer: a user application that sends messages.
Broker: receives messages from producers and routes them to consumers. A broker consists of an exchange and one or more queues.
Consumer: an application that receives messages and processes them.
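A minimal routing sketch, assuming hypothetical project, task, and queue names (the broker lives on the Main server; A and B each run a worker bound to its own queue):

from celery import Celery

# 'main-server' and the credentials are placeholders for your broker host.
app = Celery('proj', broker='amqp://user:password@main-server//')

# Route each task to the queue that its dedicated server consumes.
app.conf.task_routes = {
    'proj.tasks.task_for_a': {'queue': 'queue_a'},
    'proj.tasks.task_for_b': {'queue': 'queue_b'},
}

# On server A:  celery -A proj worker -Q queue_a
# On server B:  celery -A proj worker -Q queue_b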

Celery revocations are lost on rabbitMQ restart

We're using Celery ETA tasks to schedule tasks far (like months) in the future.
We are now using the RabbitMQ backend because the Mongo backend lost such tasks on a worker restart.
Tasks with the RabbitMQ backend do seem to be persistent across Celery and RabbitMQ restarts, BUT revoke messages seem to be lost on RabbitMQ restarts.
I guess that if revoke messages are lost, the ETA tasks that should have been killed will execute anyway.
This may be helpful from the documentation (Persistent Revokes):
The list of revoked tasks is in-memory, so if all workers restart the list of revoked ids will also vanish. If you want to preserve this list between restarts you need to specify a file for these to be stored in by using the --statedb argument to celery worker:
$ celery -A proj worker -l info --statedb=/var/run/celery/worker.state

Notification from Celery when it cannot connect to broker?

I have a Celery setup running fine with RabbitMQ as the broker. I also have CELERY_SEND_TASK_ERROR_EMAILS=True in my settings, so I receive emails if an exception is thrown while executing a task, which is fine.
My question: is there a way, with either Celery or RabbitMQ, to receive an error notification from Celery if the broker connection cannot be established, or from RabbitMQ itself if the running rabbitmq-server dies?
I think the right tool for this job is a process control system like supervisord, which launches/watches processes and can trigger events when those processes die or restart. More specifically, using the plugin superlance, you can send an email when a process dies.
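For example, here is a minimal supervisord snippet using superlance's crashmail listener (the command path and email address are placeholders, and I'm assuming you run rabbitmq-server under supervisord):

; Watch rabbitmq-server and restart it if it dies.
[program:rabbitmq]
command=/usr/sbin/rabbitmq-server
autorestart=true

; crashmail (from superlance) emails on any unexpected process exit.
[eventlistener:crashmail]
command=crashmail -a -m ops@example.com
events=PROCESS_STATE_EXITED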

Multiple Celery projects with the same RabbitMQ broker

How can I use two different Celery projects that consume messages from a single RabbitMQ installation?
Generally, these scripts work fine if I give each project its own RabbitMQ instance, but on the production machine they need to share the same RabbitMQ backend.
Note: due to some constraints I cannot merge the new project into the existing one, so they will remain two separate projects.
RabbitMQ has the ability to create virtual message brokers called virtual hosts or vhosts. Each one is essentially a mini-RabbitMQ server with its own queues. This lets you safely use one RabbitMQ server for multiple applications.
rabbitmqctl add_vhost command creates a vhost.
By default Celery uses the / default vhost:
celery worker --broker=amqp://guest@localhost//
But you can use any custom vhost:
celery worker --broker=amqp://guest@localhost/myvhost
Examples:
rabbitmqctl add_vhost new_host
rabbitmqctl add_vhost /another_host
celery worker --broker=amqp://guest@localhost/new_host
celery worker --broker=amqp://guest@localhost//another_host
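Note that you will likely also need to grant your user permissions on each new vhost, e.g.:
rabbitmqctl set_permissions -p new_host guest ".*" ".*" ".*"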
