I actually had issues running mine from the command prompt because I was using the wrong command, but I found a link to a project which I forked.
(On macOS) celery -A Project worker --loglevel=info
(On Windows) celery -A Project worker -l info --pool=solo
Run
celery -A Project worker --loglevel=info
in the project folder where your Celery object is created.
You can use the file name to get your tasks discovered.
For example: if your file is named tasks.py and the Celery object is created in that file, the command will be:
celery -A tasks worker --loglevel=info
I'm using Celery on my Django app.
I can't get the celery task to execute unless I run celery -A appname worker -l info --pool=solo in the terminal.
How do I get it to execute when the site is in production?
You need to add a worker process to your Procfile, e.g.
web: gunicorn some_app.wsgi
worker: celery -A appname worker -l info --pool solo
After redeploying, scale up one or more workers, e.g. by running
heroku ps:scale worker=1
I cannot see the tasks in admin.
I followed the steps in https://github.com/jezdez/django-celery-monitor
I used
celery==4.1.1
django-celery-results==1.0.1
django-celery-beat==1.0.1
django_celery_monitor==1.1.2
I ran manage.py migrate celery_monitor and the migrations went well. Then I ran
celery -A lbb events -l info --camera django_celery_monitor.camera.Camera --frequency=2.0
and
celery -A lbb worker -l info
in separate shells, but I still cannot see the tasks I ran in the celery-monitor > tasks table.
Running the celery command with -E to force events worked for me.
celery -A proj worker -l info -E
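If you prefer not to pass -E on every invocation, the same events can be enabled in the Celery configuration; a sketch, assuming Celery 4.x lowercase setting names:

```python
# celeryconfig.py -- equivalent of starting the worker with -E
worker_send_task_events = True  # emit task-started/succeeded/failed events
task_send_sent_event = True     # also emit an event when a task is published
```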
The scenario:
Two unrelated web apps with celery background tasks running on same server.
One RabbitMQ instance
Each web app has its own virtualenv (including celery). Same celery version in both virtualenvs.
I use the following command lines to start a worker and a beat process for each application.
celery -A firstapp.tasks worker
celery -A firstapp.tasks beat
celery -A secondapp.tasks worker --hostname foobar
celery -A secondapp.tasks beat
Now everything seems to work OK, but in the worker process of secondapp I get the following error:
Received unregistered task of type 'firstapp.tasks.do_something'
Is there a way to isolate the two Celery instances from each other?
I'm using Celery version 3.1.16, BTW.
I believe I fixed the problem by creating a RabbitMQ vhost and configuring the second app to use that one.
Create vhost (and set permissions):
sudo rabbitmqctl add_vhost /secondapp
sudo rabbitmqctl set_permissions -p /secondapp guest ".*" ".*" ".*"
And then change the command lines for the second app:
celery -A secondapp.tasks -b amqp://localhost//secondapp worker
celery -A secondapp.tasks -b amqp://localhost//secondapp beat
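Alternatively, the vhost can be pinned in the second app's configuration so the -b flag is not needed on each command line; a sketch, assuming the uppercase setting names used by Celery 3.1:

```python
# secondapp configuration (sketch) -- keep this app on its own vhost
BROKER_URL = 'amqp://guest@localhost//secondapp'
```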
I am running Celery on two servers with one Redis instance as the broker.
The Celery start command looks like the following:
celery multi start 2 -A app_name
Flower start command:
celery flower -A app_name --address=10.41.31.210 --port=5555
In flower's output there are some warnings:
WARNING:flower.api.control:'stats' inspect method failed
WARNING:flower.api.control:'active_queues' inspect method failed
WARNING:flower.api.control:'registered' inspect method failed
WARNING:flower.api.control:'scheduled' inspect method failed
WARNING:flower.api.control:'active' inspect method failed
WARNING:flower.api.control:'reserved' inspect method failed
WARNING:flower.api.control:'revoked' inspect method failed
WARNING:flower.api.control:'conf' inspect method failed
And the strangest thing for me: not all workers are displayed in Flower's dashboard. It seems that after every Flower restart only some workers are displayed. Based on my start scripts there should be at least 8 workers, but I see 4 or sometimes 6.
Looking for any solution or advice. Thank you.
P.S. I don't have any problems with the same services when only one server is used for Celery workers.
The problem here is that Flower starts before Celery is ready.
This can easily be checked with celery inspect ping.
Here is an example from my project, start_flower.sh:
#!/bin/sh
# BusyBox timeout syntax (-t); with GNU coreutils it would be: timeout 10 <command>
until timeout -t 10 celery -A project inspect ping; do
>&2 echo "Celery workers not available"
done
echo 'Starting flower'
celery -A project flower
Try it:
shell > celery -A app_name worker -l info
another shell > celery -A djangocelery flower
It works.
I think the solution is to run the celery flower command like this: celery -A app_name flower --address=10.41.31.210 --port=5555, with the address and port arguments after flower and -A app_name before it.
I am trying to run these commands on Heroku:
python manage.py celery worker --loglevel=info
python manage.py celery beat
I can run them from my terminal, but when I close it the processes stop running. I probably should change my Procfile:
web: gunicorn appname.wsgi
celery: python manage.py celery worker --loglevel=info
celery: python manage.py celery beat
but I don't know how, or whether I really should change anything.
How do I keep celery processes running on my heroku app?
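One sketch of what the Procfile could look like: Heroku requires each process type to have a unique name, so the two celery: lines above would collide, and giving the worker and beat processes distinct names avoids that.

```
web: gunicorn appname.wsgi
worker: python manage.py celery worker --loglevel=info
beat: python manage.py celery beat
```

After deploying, the dynos can be started with heroku ps:scale worker=1 beat=1.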