Django Celery delay() always pushing to default 'celery' queue - python

I'm ripping my hair out with this one.
The crux of my issue is that setting CELERY_DEFAULT_QUEUE in my Django settings.py is not routing my tasks to that queue. They always go to the default celery queue in my broker.
However, if I specify queue='proj:dev' in the shared_task decorator, the task goes to the correct queue and everything behaves as expected.
My setup is as follows:
Django code on my localhost (for testing and such), executing tasks with .delay() via Django's shell (manage.py shell)
a remote Redis instance configured as my broker
2 celery workers configured on a remote machine setup and waiting for messages from Redis (On Google App Engine - irrelevant perhaps)
NB: For the pieces of code below, I've obscured the project name and used proj as a placeholder.
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery, shared_task
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY', force=True)
app.autodiscover_tasks()
@shared_task
def add(x, y):
    return x + y
settings.py
...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'redis://:{}@{}:6379/0'.format(
    os.environ.get('REDIS_PASSWORD'),
    os.environ.get('REDIS_HOST', 'alice-redis-vm'))
CELERY_DEFAULT_QUEUE = os.environ.get('CELERY_DEFAULT_QUEUE', 'proj:dev')
The idea is that, for right now, I'd like to have different queues for the different environments that my code exists in: dev, staging, prod. Thus, on Google App Engine, I define an environment variable that is passed based on the individual App Engine service.
Steps
So, with the above configuration, I fire up the shell using ./manage.py shell and run add.delay(2, 2). I get an AsyncResult back, but Redis MONITOR clearly shows the message was pushed to the default celery queue:
1497566026.117419 [0 155.93.144.189:58887] "LPUSH" "celery"
...
What am I missing?
Not to throw a spanner in the works, but I feel like there was a point today at which this was actually working. But for the life of me, I can't think what part of my brain is failing me here.
Stack versions:
python: 3.5.2
celery: 4.0.2
redis: 2.10.5
django: 1.10.4

This issue is far simpler than I thought: incorrect documentation!
The Celery documentation asks us to use CELERY_DEFAULT_QUEUE to set the task_default_queue configuration on the celery object.
Ref: http://docs.celeryproject.org/en/latest/userguide/configuration.html#new-lowercase-settings
The setting we should actually use is CELERY_TASK_DEFAULT_QUEUE; the documented name is inconsistent with the naming scheme of all the other settings. It was raised on GitHub here - https://github.com/celery/celery/issues/3772
Solution summary
Using CELERY_DEFAULT_QUEUE in a configuration module (using config_from_object) has no effect on the queue.
Use CELERY_TASK_DEFAULT_QUEUE instead.
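As a minimal sketch of the fix for this question's setup (the 'proj:dev' fallback comes from the original settings.py):

```python
# settings.py (sketch): with namespace='CELERY' in config_from_object,
# Celery 4 maps CELERY_TASK_DEFAULT_QUEUE to the lowercase
# task_default_queue setting; CELERY_DEFAULT_QUEUE is silently ignored.
import os

CELERY_TASK_DEFAULT_QUEUE = os.environ.get('CELERY_DEFAULT_QUEUE', 'proj:dev')
```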

If you are here because you're trying to use a predefined queue with SQS in Celery, and you find that Celery creates a new queue called "celery" in SQS regardless of what you tell it, you've reached the end of your journey, friend.
Before passing broker_transport_options to Celery, change your default queue and/or specify the queues you will use explicitly. In my case I needed just the one queue, so the following worked:
celery.conf.task_default_queue = "<YOUR_PREDEFINED_QUEUE_NAME_IN_SQS>"
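Spelled out a little further as a sketch (the queue name and URL are placeholders, not from the original answer); with SQS, the predefined_queues transport option tells Kombu to use only the listed queues and never create new ones:

```python
# Sketch: values to apply via app.conf before the worker starts.
# 'my-queue' and the URL below are placeholder assumptions.
task_default_queue = 'my-queue'
broker_transport_options = {
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',
        },
    },
}
```

The key point is that the default queue name must match one of the predefined queues, or Celery falls back to auto-creating "celery".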

Related

How to propagate errors in python rq worker tasks to Sentry

I have a Flask app with Sentry error tracking. Now I created some tasks with rq, but their errors do not show up in Sentry Issues stream. I can tell the issues aren't filtered out, because the number of filtered issues doesn't increase. The errors show up in heroku logs --tail.
I run the worker with rq worker homework-fetcher -c my_app.rq_sentry
my_app/rq_sentry.py:
import os
import sentry_sdk
from sentry_sdk.integrations.rq import RqIntegration
dsn = os.environ["SENTRY_DSN"]
print(dsn) # I confirmed this appears in logs, so it is initialized
sentry_sdk.init(dsn=dsn, integrations=[RqIntegration()])
Do I have something wrong, or should I set up a full app confirming this and publish a bug report?
Also, I have a (a bit side-) question:
Should I include RqIntegration and RedisIntegration in sentry settings of the app itself? What is the benefit of these?
Thanks a lot for any help
Edit 1: when I schedule the task my_app.nonexistent_module, the worker correctly raises an error, which is caught by Sentry.
So I maybe change my question: how to propagate Exceptions in rq worker tasks to Sentry?
So after 7 months, I figured it out.
The Flask app uses the app factory pattern. In the worker, I need to access the database the same way the app does, and for that I need the app context. So I do from app_factory import create_app, and then create_app().app_context().push(). And that's the issue: the factory function also initializes Sentry for the app itself, so Sentry ends up initialized twice, once for the app and once for the worker. Combined with the fact that I called the app initialization in the worker tasks file (not the config), that probably overrode the correct Sentry initialization and prevented the task errors from being reported.
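The general shape of the fix is to make initialization idempotent, so a second call cannot clobber the first. A pure-Python stand-in sketch (sentry_sdk is deliberately not imported here; the names are illustrative):

```python
# Guard so that whichever component initializes first wins and
# any later call becomes a no-op.
_initialized = False

def init_once(init_fn):
    """Run init_fn only on the first call; return True if it ran."""
    global _initialized
    if _initialized:
        return False
    init_fn()
    _initialized = True
    return True
```

In the real app this would wrap the sentry_sdk.init call, so both the app factory and the worker bootstrap can call it safely.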

Can't set airflow's celery_result_backend setting to 'rpc://'

Is 'rpc://' a valid value for the 'celery_result_backend' setting in airflow config? It doesn't seem to work.
I assumed it would work, as it's a valid value in the core Celery config.
Since we're using Celery on Redis the URLs both start with: redis://.
If you were using Celery with RabbitMQ the URLs would start with: amqp://
The AWS SQS ones would start with: sqs://
I don't see any broker URL that starts with rpc:// in the broker documentation.
I do see that the result backend for RabbitMQ can start with rpc://. Since it's just a string passed to the library in question: did you install with celery[librabbitmq], and are you sure you're not mixing up the broker URL with the result backend, like I almost did?

How to get Celery to load the config from command line?

I am attempting to use celery worker to load a config file at the command line:
celery worker --config=my_settings_module
This doesn't appear to work. celery worker starts with its default settings (which assume a RabbitMQ server at localhost:5672). In my config, I would like to point Celery at a different place, but when I change the amqp settings in the config file, Celery doesn't seem to care; it still shows the default RabbitMQ settings.
I also tried something bogus
celery worker --config=this_file_does_not_exist
And Celery once again did not care. The worker started and attached to the default RabbitMQ. It's not even looking at the --config setting
I read about how Celery lazy loads. I'm not sure that has anything to do with this.
How do I get celery worker to honor the --config setting?
If you give an invalid module name, or a module name which is not on the PYTHONPATH, say celery worker --config=invalid_foo, celery will raise an error.
You can verify this by creating a simple config file.
$ celery worker -h
--config=CONFIG Name of the configuration module
As the celery worker help says, you should pass a configuration module name; otherwise it will raise an error.
If you just run
celery worker
it will start the worker and its output will be colored.
In the same directory, create a file called c.py with this line:
CELERYD_LOG_COLOR = False
Now run
celery worker --config=c
it will start the worker and its output will not be colored.
If you run celery worker --config=c.py, it will raise an error.
celery.utils.imports.NotAPackage: Error: Module 'c.py' doesn't exist, or it's not a valid Python module name.
Did you mean 'c'?
I had the exact same error, but I eventually figured out that I had made simple option-naming mistakes in the configuration module itself, which were not obvious at all.
You see, when you start out and follow the tutorial, you will end up with something that looks like this in your main module:
app = Celery('foo', broker='amqp://user:pass@example.com/vsrv', backend='rpc://')
Which works fine, but then later as you add more and more configuration options you decide to move the options to a separate file, at which point you go ahead and just copy+paste and split the options into lines until it looks like this:
Naïve my_settings.py:
broker='amqp://user:pass@example.com/vsrv'
backend='rpc://'
result_persistent=True
task_acks_late=True
# ... etc. etc.
And there you just fooled yourself! Because in a settings module the options are called broker_url and result_backend, not broker and backend as they are called in the instantiation above.
Corrected my_settings.py:
broker_url='amqp://user:pass@example.com/vsrv'
result_backend='rpc://'
result_persistent=True
task_acks_late=True
# ... etc. etc.
And all of a sudden, your worker boots up just fine with all settings in place.
I hope this will cure a few headaches of fellow celery newbies like us.
Further note:
You can test that Celery does not in fact ignore your file by placing a print statement (or a print() call if you're on Python 3) in the settings module.
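For example, extending the c.py from above per that note:

```python
# c.py -- the worker's config module; the print proves at worker
# startup that Celery actually imported this file.
print("c.py settings loaded")

CELERYD_LOG_COLOR = False
```

If you don't see the line in the worker's startup output, the module was never loaded.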

Celery worker hangs without any error

I have a production setup running celery workers that make POST/GET requests to a remote service and store the result; it handles a load of around 20k tasks per 15 minutes.
The problem is that the workers go numb for no reason: no errors, no warnings.
I have tried adding multiprocessing as well, with the same result.
In the log I see the task execution times increasing, like succeeded in s
For more details look at https://github.com/celery/celery/issues/2621
If your celery worker gets stuck sometimes, you can use strace & lsof to find out which system call it is stuck on.
For example:
$ strace -p 10268 -s 10000
Process 10268 attached - interrupt to quit
recvfrom(5,
10268 is the pid of the celery worker; recvfrom(5 means the worker is blocked receiving data from file descriptor 5.
Then you can use lsof to check what file descriptor 5 is in this worker process.
lsof -p 10268
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
......
celery 10268 root 5u IPv4 828871825 0t0 TCP 172.16.201.40:36162->10.13.244.205:wap-wsp (ESTABLISHED)
......
It indicates that the worker is stuck on a TCP connection (you can see 5u in the FD column).
Some Python packages, like requests, block waiting for data from the peer, which can cause the celery worker to hang. If you are using requests, please make sure to set the timeout argument.
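If hunting down every blocking call is impractical, the standard library offers a coarse safety net (a sketch; per-call timeouts such as requests.get(url, timeout=...) remain the cleaner fix):

```python
import socket

# Any socket created after this call inherits a 30-second timeout,
# so a dead peer raises socket.timeout instead of hanging the worker.
socket.setdefaulttimeout(30)
```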
Have you seen this page:
https://www.caktusgroup.com/blog/2013/10/30/using-strace-debug-stuck-celery-tasks/
I also faced this issue when calling .delay() on a @shared_task with celery, kombu, amqp and billiard. The API call worked fine, but as soon as execution reached delay() it hung.
The issue was that the lines below were missing from the main application's __init__.py. They make sure the app is always imported when Django starts, so that shared_task uses this app.
In __init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celeryApp
__all__ = ('celeryApp',)
Note 1: in place of celeryApp, use the app object that celery.py defines; import the app from there and name it here.
Note 2: if you are only facing the hang in shared tasks, the solution above should fix it and you can ignore what follows.
I also want to mention another issue: if anyone is facing an Error 111 connection problem, please check whether your versions of amqp, billiard, celery and kombu support each other (amqp==2.2.2, billiard==3.5.0.3, celery==4.1.0, kombu==4.1.0 are just an example of a set that does). Also check whether Redis is installed on your system (if you are using Redis).
Also make sure you are using Kombu 4.1.0; newer versions of Kombu rename async to asynchronous.
Follow this tutorial: Celery Django Link
Add the following to the settings.
NB: install Redis for both the transport and the result backend.
# TRANSPORT
CELERY_BROKER_TRANSPORT = 'redis'
CELERY_BROKER_HOST = 'localhost'
CELERY_BROKER_PORT = '6379'
CELERY_BROKER_VHOST = '0'
# RESULT
CELERY_RESULT_BACKEND = 'redis'
CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = '6379'
CELERY_REDIS_DB = '1'

Getting reusable tasks to work in a setup with one celery server and 3k+ django sites, each with its own database

Here's the problem: I have one celery server and 3k+ django sites, each with its own database. New sites (and databases) can be added dynamically.
I'm writing celery tasks which need to be run for each site, through the common celery server. The code is in an app which is meant to be reusable, so it shouldn't be written in a way that ties it to this particular setup.
So. Without mangling the task code to fit my exact setup, how can I make sure that the tasks connect to the correct database when they run?
This is hard to accomplish because of an inherent limitation in Django: The settings are global. So unless all the apps shared the same settings, this is going to be a problem.
You could try spawning a new worker process for every task and creating the Django environment each time. Don't use django-celery; use celery directly, with something like this in celeryconfig.py:
from celery import signals
from importlib import import_module
def before_task(task, **kwargs):
    settings_module = task.request.kwargs.pop("settings_module", None)
    if settings_module:
        settings = import_module(settings_module)
        from django.conf import setup_environ
        setup_environ(settings)

signals.task_prerun.connect(before_task)
CELERYD_MAX_TASKS_PER_CHILD = 1
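A pure-Python sketch of what that task_prerun handler does to the task's kwargs (the function name and example values are illustrative, not Celery API): each call supplies its site's settings module, which is popped before the task body runs.

```python
def extract_settings_module(task_kwargs):
    # Pop the per-site settings module before the task body sees its
    # kwargs, mirroring the before_task handler above.
    return task_kwargs.pop("settings_module", None), task_kwargs

module, remaining = extract_settings_module(
    {"settings_module": "site0001.settings", "site_id": 1}
)
```

With CELERYD_MAX_TASKS_PER_CHILD = 1, each task runs in a fresh worker process, so the global Django settings never leak between sites.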
