In my Procfile I have the following:
worker: cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
However, when my app submits a task, it is never executed. I think the celery worker is at least partially working, though, because
heroku logs --app appname
every so often gives me one of these:
2016-07-22T07:53:21+00:00 app[heroku-redis]: source=REDIS sample#active-connections=14 sample#load-avg-1m=0.03 sample#load-avg-5m=0.09 sample#load-avg-15m=0.085 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664884.0kB sample#memory-free=13458244.0kB sample#memory-cached=187136kB sample#memory-redis=566800bytes sample#hit-rate=0.17778 sample#evicted-keys=0
Also, when I open up bash by running
heroku run bash --app appname
and then type in
cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
It immediately tells me the task has been received and then executes it. I would like this to happen without having to manually log in and run the command - is that possible? Do I need a paid account on Heroku to do that?
I figured it out. It turns out you also have to run
heroku ps:scale worker=1 --app appname
Otherwise you won't actually have a worker dyno running.
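For reference, here is a minimal sketch of what a celery_setup module like the one referenced in the Procfile above might contain. The module and variable names are assumptions matching the --app=appname.celery_setup:app flag, and the Redis URL comes from Heroku's REDIS_URL config var; adapt both to your project.

```python
# appname/celery_setup.py -- minimal bootstrap sketch (names are assumptions
# matching --app=appname.celery_setup:app in the Procfile above).
import os

from celery import Celery

# Heroku Redis exposes its connection string via the REDIS_URL config var;
# the localhost fallback is just for local development.
app = Celery("appname", broker=os.environ.get("REDIS_URL", "redis://localhost:6379/0"))
```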
Related
I'm using Celery on my Django app.
I can't get the celery task to execute unless I run celery -A appname worker -l info --pool=solo in the terminal.
How do I get it to execute when the site is in production?
You need to add a worker process to your Procfile, e.g.
web: gunicorn some_app.wsgi
worker: celery -A appname worker -l info --pool solo
After redeploying, scale up one or more workers, e.g. by running
heroku ps:scale worker=1
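For context, the worker dyno started this way executes whatever tasks are registered with your app. A sketch of such a task module, with hypothetical names:

```python
# appname/tasks.py (sketch) -- a task the worker dyno would pick up and run.
# The module path and function name here are made-up examples.
from celery import shared_task

@shared_task
def add(x, y):
    # Runs on the worker dyno, not the web dyno.
    return x + y
```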
I cannot see the tasks in admin.
I followed the steps in https://github.com/jezdez/django-celery-monitor
I used
celery==4.1.1
django-celery-results==1.0.1
django-celery-beat==1.0.1
django_celery_monitor==1.1.2
I ran manage.py migrate celery_monitor and the migrations went well. Then I ran celery -A lbb events -l info --camera django_celery_monitor.camera.Camera --frequency=2.0 and celery -A lbb worker -l info in separate shells, but I still cannot see the tasks I ran in the celery-monitor > tasks table.
Running the celery worker with -E to force it to send events worked for me.
celery -A proj worker -l info -E
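If you don't want to remember the -E flag, the same behavior can be enabled permanently in settings. A sketch, assuming Celery 4's Django integration with the usual CELERY_ settings namespace (i.e. app.config_from_object('django.conf:settings', namespace='CELERY')):

```python
# settings.py (sketch) -- permanently enable the events that -E turns on,
# assuming your Celery app reads Django settings with namespace='CELERY'.
CELERY_WORKER_SEND_TASK_EVENTS = True  # equivalent of running the worker with -E
CELERY_TASK_SEND_SENT_EVENT = True     # also emit task-sent events for monitoring
```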
The scenario:
Two unrelated web apps with celery background tasks running on same server.
One RabbitMQ instance
Each web app has its own virtualenv (including celery). Same celery version in both virtualenvs.
I use the following command lines to start a worker and a beat process for each application.
celery -A firstapp.tasks worker
celery -A firstapp.tasks beat
celery -A secondapp.tasks worker --hostname foobar
celery -A secondapp.tasks beat
Now everything seems to work OK, but in the worker process of secondapp I get the following error:
Received unregistered task of type 'firstapp.tasks.do_something'
Is there a way to isolate the two Celery instances from each other?
I'm using Celery version 3.1.16, BTW.
I believe I fixed the problem by creating a RabbitMQ vhost and configuring the second app to use that one.
Create vhost (and set permissions):
sudo rabbitmqctl add_vhost /secondapp
sudo rabbitmqctl set_permissions -p /secondapp guest ".*" ".*" ".*"
And then change the command lines for the second app:
celery -A secondapp.tasks -b amqp://localhost//secondapp worker
celery -A secondapp.tasks -b amqp://localhost//secondapp beat
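Equivalently, the broker can be set in each app's config instead of on the command line. A sketch for the second app, assuming Celery 3.1's BROKER_URL setting; the SECONDAPP_BROKER_URL env-var override is a hypothetical name invented here for illustration:

```python
# secondapp config (sketch) -- point the second app at its own vhost so its
# worker never sees firstapp's tasks. SECONDAPP_BROKER_URL is a hypothetical
# override; the default matches the vhost created with rabbitmqctl above.
import os

BROKER_URL = os.environ.get("SECONDAPP_BROKER_URL", "amqp://localhost//secondapp")
```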
I have automated tasks working locally but not remotely in my Django app. I was watching a tutorial and the guy said to stop my worker, but before I did that I put my app in maintenance mode; that didn't work. Then I ran
heroku ps:restart
That didn't work, so I ran
heroku ps:stop worker
which output
Warning: The dynos in your dyno formation will be automatically restarted.
Then I ran
heroku ps:scale worker=1
and still nothing. I remind those reading this that it worked locally. What am I missing?
My Procfile:
web: gunicorn gettingstarted.wsgi --log-file -
worker: celery worker -A blog -l info
While researching I've seen mentions of adding beat to the Procfile (two mentions, in fact), but this was not discussed in the tutorial I watched. The only time celery beat was mentioned is when I added this to the settings.py file:
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
and just in case it makes a difference, I'm using the djcelery admin GUI to set periodic tasks, not configuring the scheduler in settings.py like I see in the majority of examples.
If I run the task in my view and call it, it works. But it won't run if I set it up using djcelery.
I read the docs and realized I had to add -B to my worker line in the Procfile, so it now looks like this:
celery -A proj worker -B -l info
After making the change I ran
heroku ps:scale worker=0
then
git add .
git commit -am 'added -B'
git push heroku master
then I
heroku ps:scale worker=1
then, so I could see the output from Heroku:
heroku logs -t -p worker
Then I created a schedule in my admin and it worked; I saw the output in the console. Hope this helps. NOTE: the docs say -B is not recommended for production. I'm not sure why, but if you know or find out, let me know.
As you've read in the docs, using the -B option is not recommended for production use; you're better off running celery beat as a separate process. So best practice is to run it on the server like:
celery beat -A messaging_router --loglevel=INFO
And if you're using supervisor to keep your processes running, you'd add something like the following to your configuration file.
[program:api_beat]
command=/path/to/v_envs/v_env/bin/celery -A project beat --loglevel=info
autostart=true
autorestart=true
; run as your deploy user (check with: echo $USER)
user=your_user
directory=/path/to/project/root
stdout_logfile=/var/log/supervisor/beat.log
stderr_logfile=/var/log/supervisor/beat.log
redirect_stderr=true
environment=ENV_VAR1="whatever"
The reason for this is, as the docs say:
You can also embed beat inside the worker by enabling the workers -B option, this is convenient if you'll never run more than one worker node, but it's not commonly used and for that reason is not recommended for production use:
If you have more than one worker, then during maintenance you always need to be wary of which one of the celery workers you've run with -B, and that can definitely become a burden.
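For completeness, a schedule can also be declared statically in settings instead of through the djcelery admin. A sketch using the old-style CELERYBEAT_SCHEDULE setting; the task name and dotted path below are made up for illustration:

```python
# settings.py (sketch) -- a static beat schedule that a separately running
# beat process will pick up. The task name and path are hypothetical.
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "send-report-every-5-minutes": {
        "task": "blog.tasks.send_report",   # hypothetical task
        "schedule": timedelta(minutes=5),   # run every five minutes
    },
}
```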
I am trying to run these commands on Heroku:
python manage.py celery worker --loglevel=info
python manage.py celery beat
I can run them from my terminal, but when I close it the processes stop running. I should probably change my Procfile:
web: gunicorn appname.wsgi
celery: python manage.py celery worker --loglevel=info
celery: python manage.py celery beat
but I don't know how, or whether I really should change anything.
How do I keep celery processes running on my heroku app?
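Based on the earlier answers in this thread, a likely fix is to give each process its own uniquely-named entry in the Procfile; a Procfile cannot contain two entries with the same name, so the two celery: lines above won't both take effect. A sketch (the process names worker and beat are conventional, not required):

```
web: gunicorn appname.wsgi
worker: python manage.py celery worker --loglevel=info
beat: python manage.py celery beat
```

Then, after redeploying, scale them up as dynos with heroku ps:scale worker=1 beat=1 so Heroku keeps them running independently of your terminal.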