Flower doesn't display all workers for celery - python

I am running Celery on two servers, with a single Redis instance as the broker.
The Celery start command looks like the following:
celery multi start 2 -A app_name
Flower start command:
celery flower -A app_name --address=10.41.31.210 --port=5555
In Flower's output there are some warnings:
WARNING:flower.api.control:'stats' inspect method failed
WARNING:flower.api.control:'active_queues' inspect method failed
WARNING:flower.api.control:'registered' inspect method failed
WARNING:flower.api.control:'scheduled' inspect method failed
WARNING:flower.api.control:'active' inspect method failed
WARNING:flower.api.control:'reserved' inspect method failed
WARNING:flower.api.control:'revoked' inspect method failed
WARNING:flower.api.control:'conf' inspect method failed
The strangest thing for me is that not all workers are displayed in Flower's dashboard. It seems that after every Flower restart only some of the workers show up. Based on my start scripts there should be at least 8 workers, but I see 4, or sometimes 6.
Looking for any solution or advice. Thank you.
P.S. I don't have any problems with the same services when only one server is used for the Celery workers.

The problem here is that Flower starts before the Celery workers are ready.
This can easily be checked with celery inspect ping.
Here is an example from my project, start_flower.sh:
#!/bin/sh
# Keep pinging until at least one worker replies, then start Flower.
# Note: "timeout -t 10" is BusyBox syntax; with GNU coreutils use "timeout 10".
until timeout -t 10 celery -A project inspect ping; do
    >&2 echo "Celery workers not available"
done

echo 'Starting flower'
celery -A project flower
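The same readiness check can also be done from Python, which is handy if the launcher is a Python script; a minimal sketch, assuming the Celery app object is importable as project.celery:

from project.celery import app  # assumption: the app object lives in project/celery.py

# ping() returns a {hostname: {'ok': 'pong'}} mapping, or None if no worker replies
replies = app.control.inspect(timeout=10).ping()
if replies:
    print('Workers online:', ', '.join(replies))
else:
    print('Celery workers not available')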

Try running the worker and Flower in two separate shells:
shell 1> celery -A app_name worker -l info
shell 2> celery -A app_name flower
It works.

I think the solution is to run the Flower command like this: celery -A app_name flower --address=10.41.31.210 --port=5555, i.e. with the --address and --port arguments after flower and -A app_name before it.

Related

How to print celery worker details on Windows and Mac

I actually had issues printing mine in the command prompt because I was using the wrong command, but I found a link to a project, which I forked.
(If on Mac) celery -A Project worker --loglevel=info
(If on Windows) celery -A Project worker -l info --pool=solo
On Windows, --pool=solo is needed because the default prefork pool isn't supported there in Celery 4 and later.
Run
celery -A Project worker --loglevel=info
in the project folder where your Celery object is created.
You can use the file name to have your tasks discovered.
For example, if your file name is tasks.py and the Celery object is created in that file, the command will be:
celery -A tasks worker --loglevel=info
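For illustration, a minimal tasks.py could look like this (the Redis broker URL is an assumption; substitute whatever broker you actually use):

# tasks.py -- a minimal sketch
from celery import Celery

# assumption: a local Redis broker; replace with your own broker URL
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y

With this file in place, celery -A tasks worker --loglevel=info starts a worker that registers tasks.add.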

Django celery monitor doesn't show any tasks

I cannot see the tasks in the admin.
I followed the steps in https://github.com/jezdez/django-celery-monitor
I used
celery==4.1.1
django-celery-results==1.0.1
django-celery-beat==1.0.1
django_celery_monitor==1.1.2
I ran manage.py migrate celery_monitor and the migrations went well. Then I ran celery -A lbb events -l info --camera django_celery_monitor.camera.Camera --frequency=2.0 and celery -A lbb worker -l info in separate shells, but I still cannot see the tasks I ran in the celery-monitor > tasks table.
Running the celery worker command with -E to force task events worked for me. The -E flag makes the worker send the task-related events that monitoring tools (including the camera above) consume:
celery -A proj worker -l info -E
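The same behaviour can be enabled in configuration instead of on the command line; a sketch using the Celery 4 setting names:

# app configuration -- equivalent to passing -E to the worker
app.conf.worker_send_task_events = True  # emit task-related events for monitors
app.conf.task_send_sent_event = True     # optional: also emit task-sent events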

Two applications using celery scheduled tasks: "Received unregistered task" errors in one of the workers

The scenario:
Two unrelated web apps with Celery background tasks running on the same server.
One RabbitMQ instance
Each web app has its own virtualenv (including celery). Same celery version in both virtualenvs.
I use the following command lines to start a worker and a beat process for each application.
celery -A firstapp.tasks worker
celery -A firstapp.tasks beat
celery -A secondapp.tasks worker --hostname foobar
celery -A secondapp.tasks beat
Now everything seems to work OK, but in the worker process of secondapp I get the following error:
Received unregistered task of type 'firstapp.tasks.do_something'
Is there a way to isolate the two Celery instances from each other?
I'm using Celery version 3.1.16, BTW.
I believe I fixed the problem by creating a RabbitMQ vhost and configuring the second app to use it. On a shared vhost both apps publish and consume on the same default celery queue, which is why each worker was picking up the other app's tasks.
Create vhost (and set permissions):
sudo rabbitmqctl add_vhost /secondapp
sudo rabbitmqctl set_permissions -p /secondapp guest ".*" ".*" ".*"
And then change the command lines for the second app:
celery -A secondapp.tasks -b amqp://localhost//secondapp worker
celery -A secondapp.tasks -b amqp://localhost//secondapp beat
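Instead of passing -b on every command line, the vhost can also be baked into the app's broker URL; a sketch, assuming the default guest user:

# secondapp/tasks.py -- point this app at its dedicated vhost
from celery import Celery

app = Celery('secondapp', broker='amqp://guest:guest@localhost//secondapp')

An alternative to separate vhosts is giving each app its own default queue (CELERY_DEFAULT_QUEUE in Celery 3.x), which also stops the workers from consuming each other's tasks.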

Need to run celery worker during Django unittest

I am working on a Django-based web app. In a unit test, I need a Celery worker running in the background.
I have already used:
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
CELERY_ALWAYS_EAGER = True
BROKER_BACKEND = 'memory'
in override_settings, but these do not start a Celery worker in the background for me.
Any help would be much appreciated.
Celery won't get run by Django automatically.
You can start a worker process by running from your project root:
$ celery -A my_proj worker
my_proj should be the application name you configured with app = Celery('my_proj').
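Note that with the eager settings from the question, no background worker is involved at all: tasks execute synchronously inside the test process. A minimal sketch (setting names follow the Celery 3.x style used above; whether override_settings actually reaches the Celery app depends on how the app loads its configuration, so many projects set these in a dedicated test settings module instead):

# tests.py -- tasks run inline when ALWAYS_EAGER is set
from django.test import TestCase, override_settings

from myapp.tasks import add  # hypothetical task under test

@override_settings(CELERY_ALWAYS_EAGER=True,
                   CELERY_EAGER_PROPAGATES_EXCEPTIONS=True)
class AddTaskTests(TestCase):
    def test_add_runs_eagerly(self):
        # .delay() executes synchronously here; no worker process is needed
        result = add.delay(2, 3)
        self.assertEqual(result.get(), 5)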

Celery Cloudamqp creates new connection for each task

I am currently using nitrous.io, running Django with Celery, with CloudAMQP as my broker on the free plan (max 3 connections). I can connect just fine and start up a periodic task without problems.
When I run
celery -A proj worker -l info
2 connections are created immediately on CloudAMQP, and I am able to manually create multiple tasks on a 3rd connection; all is well. However, when I run celery beat with
celery -A proj worker -B -l info
all 3 connections are used, and if celery beat creates 1 or more new tasks, a 4th connection is opened, exceeding the maximum number of connections allowed.
I've tried and currently have set
BROKER_POOL_LIMIT = 1
but that doesn't seem to limit the connections.
I've also tried
celery -A proj worker -B -l info
celery -A proj worker -B -l info -c 1
celery -A proj worker -B -l info --autoscale=1,1 -c 1
with no luck.
Why are 2 connections created immediately when they aren't doing anything?
Is there some way to limit the initial Celery connections to 0 or 1, or to have the tasks share the celery beat connection?
While it does not actually limit connections, another user found that disabling the connection pool reduced the number of connections in practice:
https://stackoverflow.com/a/23563018/1867779
BROKER_POOL_LIMIT = 0
The Redis and Mongo backends have their own connection limit parameters.
http://docs.celeryproject.org/en/master/configuration.html#celery-redis-max-connections
http://docs.celeryproject.org/en/master/configuration.html#celery-mongodb-backend-settings (using the max_pool_size parameter)
The AMQP backend does not have such a setting.
http://docs.celeryproject.org/en/master/configuration.html#amqp-backend-settings
Given that, I'm not sure what BROKER_POOL_LIMIT is meant to do, but I'd really like to see CELERY_AMQP_MAX_CONNECTIONS.
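For reference, a settings sketch combining the options discussed above (CELERY_REDIS_MAX_CONNECTIONS only applies when Redis is the result backend; as noted, there is no AMQP equivalent):

# settings.py -- connection-related options (Celery 3.x names)
BROKER_POOL_LIMIT = 0             # disable the broker connection pool entirely
CELERY_REDIS_MAX_CONNECTIONS = 2  # cap for the Redis result-backend pool, if used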
Here's a related, unanswered question: How can I minimise connections with django-celery when using CloudAMQP through dotcloud?
