How can I use two different Celery projects that consume messages from a single RabbitMQ installation?
Generally, these scripts work fine if I use a separate RabbitMQ instance for each of them. But on the production machine, I need them to share the same RabbitMQ backend.
Note: due to some constraints, I cannot merge the new project into the existing one, so they will remain two different projects.
RabbitMQ has the ability to create virtual message brokers called virtual hosts or vhosts. Each one is essentially a mini RabbitMQ server with its own queues. This lets you safely use one RabbitMQ server for multiple applications.
The rabbitmqctl add_vhost command creates a vhost.
By default, Celery uses the default vhost, /:
celery worker --broker=amqp://guest@localhost//
But you can use any custom vhost:
celery worker --broker=amqp://guest@localhost/myvhost
Examples:
rabbitmqctl add_vhost new_host
rabbitmqctl add_vhost /another_host
celery worker --broker=amqp://guest@localhost/new_host
celery worker --broker=amqp://guest@localhost//another_host
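Note that a freshly created vhost has no permissions set (the guest user only has access to / by default), so you typically also grant access, e.g. rabbitmqctl set_permissions -p new_host guest ".*" ".*" ".*". A minimal sketch of how each project's Celery app could then point at its own vhost (the project names here are made up):

from celery import Celery

# each project talks to its own vhost on the shared RabbitMQ server
app_one = Celery('project_one', broker='amqp://guest@localhost/new_host')
app_two = Celery('project_two', broker='amqp://guest@localhost//another_host')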
Related
I have a uwsgi-based application (one worker) deployed in a Kubernetes cluster, and I would like worker crashes/failures to be handled by Kubernetes instead of uwsgi.
I checked the following docs, but they weren't helpful:
https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
https://uwsgi-docs.readthedocs.io/en/latest/Options.html
Appreciate your hints.
I am having issues running my Python Flask application from a Docker pull (remote pull).
In my app I use RabbitMQ as the message broker and Celery as the task scheduler. It works as expected when running locally, but when I put my application in Docker and pull it from a remote system, it runs fine, except that Celery and RabbitMQ are not running with it, so all tasks (called with .delay()) run infinitely and the HTTP request is never processed.
I need help putting my Python Flask application into Docker, as my application has asynchronous tasks to be processed with Celery. I am not sure how to modify docker-compose.yml to include a Celery service.
Thanks in advance.
I think you need to link the celery container with rabbitmq.
From https://docs.docker.com/compose/compose-file/#links
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
links:
  - rabbitmq

Or with an alias:

links:
  - rabbitmq:rabbitmq
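For completeness, a minimal docker-compose.yml sketch with a separate Celery worker service; the build context, the module path app.celery, and the port are assumptions about your project layout, not your actual code:

version: '2'
services:
  rabbitmq:
    image: rabbitmq:3
  web:
    build: .
    ports:
      - "5000:5000"
    links:
      - rabbitmq
    environment:
      - CELERY_BROKER_URL=amqp://guest@rabbitmq//
  worker:
    build: .
    command: celery -A app.celery worker --loglevel=info
    links:
      - rabbitmq
    environment:
      - CELERY_BROKER_URL=amqp://guest@rabbitmq//

The key point is that both the web and worker containers must reach the broker by its service name (rabbitmq), not localhost.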
I have unit tests for my Django project.
Some of the views in my Django project run Celery tasks, and I want to check the database after these tasks run.
I have separate tests for the Celery tasks, where I call them without the .delay() method.
The main problem: what is the best and cleanest way to have a Celery worker running during the Jenkins job?
Currently I just run nohup celery -A myqpp worker & before the tests and kill all running Celery processes at the end of the job.
The best and cleanest way is not to have any Celery workers during the Jenkins job, nor any queue/result backend. Use the CELERY_ALWAYS_EAGER setting to execute your tasks locally in the unit tests, blocking until each task returns.
Check out more in the Celery documentation: CELERY_ALWAYS_EAGER docs
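For example, in Django settings used only for tests (a minimal sketch; CELERY_EAGER_PROPAGATES_EXCEPTIONS is optional but makes task failures surface in the tests):

# run tasks synchronously, in-process, instead of sending them to a broker
CELERY_ALWAYS_EAGER = True
# re-raise exceptions from eagerly executed tasks
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

Or per test case with override_settings (the URL and view here are hypothetical):

from django.test import TestCase, override_settings

@override_settings(CELERY_ALWAYS_EAGER=True)
class MyViewTests(TestCase):
    def test_view_runs_task(self):
        self.client.post('/start-job/')  # view calls some_task.delay(...)
        # the task ran synchronously, so the database can be checked here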
Just to extend the answer about always-eager mode: you can see my answer to another question about how to run a celery worker from the test setUp: https://stackoverflow.com/a/42107423/590233
But a few things need to be done there (see the sketch after this list):
Connect the celery worker to the test DB
Somehow run a message broker instance (I think you already run it before the tests, but the cleanest way is to spawn the broker instance from setUp, the same way as the celery worker)
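A rough sketch of that setUp approach using Celery 4's testing helpers; it assumes your Celery app object lives at myapp.celery (a hypothetical path) and that a broker is already reachable, and uses a TransactionTestCase so the worker thread sees committed data:

from django.test import TransactionTestCase
from celery.contrib.testing.worker import start_worker

from myapp.celery import app  # hypothetical location of your Celery app

class ViewsWithTasksTest(TransactionTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # start an in-process worker thread for the duration of the test class;
        # skip the ping check so the contrib ping task isn't required
        cls.worker_ctx = start_worker(app, perform_ping_check=False)
        cls.worker_ctx.__enter__()

    @classmethod
    def tearDownClass(cls):
        cls.worker_ctx.__exit__(None, None, None)
        super().tearDownClass()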
I use Celery and RabbitMQ for my project.
I have 3 servers (Main, A, B). A and B calculate tasks from the Main server, then post their responses back to it.
This is an organizational question: where do I need to install Celery and RabbitMQ?
As I see it, RabbitMQ must be installed on the Main server (create a rabbitmq user, etc.) and Celery on the A and B servers. Or do A and B also need RabbitMQ installed?
Thanks!
There is no need to install RabbitMQ on all servers. Installing it on one server is sufficient. You just need to route tasks to the A and B servers.
Also, remember that AMQP is a network protocol: the producers, consumers and the broker can all reside on the same machine or on different ones. The roles are as follows.
Producer: A producer is a user application that sends messages.
Broker: A broker receives messages from producers and routes them to consumers. A broker consists of an exchange and one or more queues.
Consumer: A consumer is an application that receives messages and processes them.
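As for the routing itself, a minimal sketch (the task and queue names are made up): give each server its own queue in the Main server's Celery config,

CELERY_ROUTES = {
    'tasks.calc_for_a': {'queue': 'server_a'},
    'tasks.calc_for_b': {'queue': 'server_b'},
}

then start each worker consuming only its queue, pointing at the broker on the Main server:

celery -A proj worker -Q server_a --broker=amqp://user:pass@main-host//
celery -A proj worker -Q server_b --broker=amqp://user:pass@main-host//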
We're using Celery eta tasks to schedule tasks FAR (like months) in the future.
We are now using the RabbitMQ backend, because the Mongo backend lost such tasks on a worker restart.
Tasks with the RabbitMQ backend actually seem to be persistent across Celery and RabbitMQ restarts, BUT revoke messages seem to be lost on RabbitMQ restarts.
I guess that if the revoke messages are lost, the eta tasks that should have been killed will execute anyway.
This may be helpful from the documentation (Persistent Revokes):
The list of revoked tasks is in-memory so if all workers restart the list of revoked ids will also vanish. If you want to preserve this list between restarts you need to specify a file for these to be stored in by using the --statedb argument to celery worker:
$ celery -A proj worker -l info --statedb=/var/run/celery/worker.state
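Note that if you run more than one worker per host with celery multi, the same docs suggest keeping one state file per worker instance using the %n format expansion:

$ celery multi start 2 -l info --statedb=/var/run/celery/%n.state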