Celery send_task doesn't send tasks - python

I have a server running Celery with RabbitMQ. When I send tasks using send_task, it just returns an AsyncResult object, but the actual task never runs (even though the workers are free and the queues are empty).
c = Celery("tasks", broker="amqp://guest@127.0.0.1//")
c.send_task("tasks.printing.test_print", (100), queue="print_queue", routing_key="printing.test_print")
My Celery configuration is:
CELERY_QUEUES = (
    Queue('default', routing_key='task.#'),
    Queue('print_queue', routing_key='printing.#'),
)
CELERY_DEFAULT_EXCHANGE = 'tasks'
CELERY_ROUTES = {
    'tasks.printing.test_print': {
        'queue': 'print_queue',
        'routing_key': 'printing.test_print',
    },
}
BROKER_URL = 'amqp://'
I execute only one worker:
celery -A celerymain worker --loglevel=debug
This is its initial log:
- ** ---------- [config]
- ** ---------- .> app:         __main__:0x7eff96903b50
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     amqp://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> default          exchange=tasks(topic) key=task.#
                .> print_queue      exchange=tasks(topic) key=printing.#

[tasks]
  . test_print
This is the task:
class test_print(Task):
    name = "test_print"

    def run(self, a):
        log.info("running")
        print a
The RabbitMQ queue 'print_queue' stays empty and there is nothing new in the RabbitMQ logs.
I have 4 GB free space so it's not a disk space problem.
What can be the problem here?

I solved the problem by removing the routing_key parameter from the send_task call. I don't really know why it was causing a problem, but at least it works now.

@app.task(name="test_print")
class test_print(Task):
    name = "test_print"

    def run(self, a):
        log.info("running")
        print a
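For reference, a minimal sketch of the kind of call that should work with the task registered above (the name must match the registered name "test_print", and send_task expects args as a tuple, so a single argument is written as (100,)):

c = Celery("tasks", broker="amqp://guest@127.0.0.1//")
result = c.send_task("test_print", args=(100,), queue="print_queue")
print(result.id)  # AsyncResult id; the worker consuming print_queue should pick it up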

Related

Celery keeps trying to connect to localhost instead of Amazon SQS

I'm trying to set up Celery in my Django project, using Amazon SQS as my broker. However, Celery keeps trying to find SQS on localhost for some reason.
This is my settings.py:
CELERY_BROKER_TRANSPORT = "sqs"
CELERY_BROKER_USER = env.str("DJANGO_AWS_ACCESS_KEY_ID")
CELERY_BROKER_PASSWORD = env.str("DJANGO_AWS_SECRET_ACCESS_KEY")
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": env.str("DJANGO_AWS_SQS_REGION_NAME", default="us-east-2"),
    "polling_interval": 10,
}
CELERY_DEFAULT_QUEUE = "default"
CELERY_ACCEPT_CONTENT = ["application/json"]
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_CONTENT_ENCODING = "utf-8"
CELERY_ENABLE_REMOTE_CONTROL = False
CELERY_SEND_EVENTS = False
CELERY_SQS_QUEUE_NAME = "default"
This is my celery.py:
import os
from celery import Celery
# set the default django settings module
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.production')
app = Celery('consumers') # type: Celery
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
When I start the worker using celery -A src.consumers worker --loglevel=debug, the worker tries to start with the following output and then immediately stops:
-------------- celery@aditya-PC v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.15.0-52-generic-x86_64-with-glibc2.35 2022-10-27 13:56:01
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: consumers:0x7fd77051de40
- ** ---------- .> transport: sqs://AHJJHHFYTA3GHVJHB8:**@localhost:6379//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. src.consumers.tasks.app1_test
How can I make Celery stop trying to connect to localhost and connect to SQS instead?
This is kind of a non-issue. The way the broker URL is rendered in the banner makes it look like localhost, but it actually writes to the SQS queue.
[2022-10-27 18:46:53,847: INFO/MainProcess] Connected to sqs://localhost//
This is a log from our prod environment and everything works. Create a message in SQS and you will see it gets processed.
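If you want to confirm end to end, a quick sketch (assuming the src.consumers.tasks.app1_test task shown in the worker's [tasks] list takes no arguments):

# run from a Django shell on the producer side
from src.consumers.tasks import app1_test

result = app1_test.delay()  # publishes a message to the SQS queue the worker consumes
print(result.id)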

How to see the Celery messages in Redis?

I have a Celery worker running with Redis as the broker.
Starting the worker processes gives me this:
celery -A celeryworker worker --loglevel=INFO
-------------- celery@cd38f5e26c28 v5.2.1 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.25-linuxkit-x86_64-with-glibc2.28 2021-12-14 00:22:02
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: myapp:0x7f96dd51af10
- ** ---------- .> transport: redis://redis-container:6379/1
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 6 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> 0 exchange=0(direct) key=0
[tasks]
. app.tasks.bye
. app.tasks.printme
[2021-12-14 00:22:02,708: INFO/MainProcess] Connected to redis://redis-container:6379/1
[2021-12-14 00:22:02,717: INFO/MainProcess] mingle: searching for neighbors
[2021-12-14 00:22:03,740: INFO/MainProcess] mingle: all alone
[2021-12-14 00:22:03,762: INFO/MainProcess] celery@cd38f5e26c28 ready.
[2021-12-14 00:22:23,332: INFO/MainProcess] Task app.task.bye[7e28e6a0-8aaa-4609-bd85-9312e91cb355] received
[2021-12-14 00:23:23,326: INFO/ForkPoolWorker-3] Task app.tasks.bye[7e28e6a0-8aaa-4609-bd85-9312e91cb355] succeeded in 60.061842500006605s: 'the text was byebye!!'
This is what I can see in Redis right after starting the Celery workers:
127.0.0.1:6379[1]> keys *
1) "_kombu.binding.0"
2) "_kombu.binding.celery.pidbox"
3) "_kombu.binding.celeryev"
Even if I put a long delay in my tasks (sleep(60)), so that each task takes 60 seconds to run, I still don't see anything in my Redis container.
mget <key> returns nil for all of the keys above.
I was expecting to see incoming messages, in the form of IDs or something, in Redis (I can see messages when I use SQS as the broker, but not with Redis).
Your messages are picked up immediately by your worker.
To actually see where Redis stores them, stop your worker process and then publish a task (you can execute task.delay(*args, **kwargs) from a Python shell).
You'll find your messages under the celery key (the default queue name) in Redis.
(screenshot: Celery keys in Redis)
Note: check your Redis broker URL and which logical database it is using.
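If you want to peek at the raw messages yourself, a sketch using redis-py (assuming the default celery queue name, logical database 1 as in the worker banner above, and the default JSON message format):

import json
import redis

# stop the worker first, then publish a task so the message stays in the queue
r = redis.Redis(host="redis-container", port=6379, db=1)
print(r.llen("celery"))      # number of messages waiting in the default queue
raw = r.lindex("celery", 0)  # peek at the first message without consuming it
if raw:
    msg = json.loads(raw)
    print(msg["headers"]["task"], msg["headers"]["id"])  # task name and id are in the headers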

Receiving (queuing) or executing multiple tasks at the same time in Celery

I added 10 tasks to the Celery worker. It picks them up one at a time (the default Celery behaviour): only after the first task completes is the second task received and executed.
The tasks do not depend on each other, so I would like to run them simultaneously, or at least have all 10 tasks received into the queue (visible on the Celery console) and then executed one by one.
-------------- celery@rana-04 v5.1.2 (sun-harmonics)
--- ***** -----
-- ******* ---- Windows-10-10.0.18362-SP0 2021-08-16 13:18:28
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: rana:0x2507baac198
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 8 (solo)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
Celery command:
celery -A rana worker --loglevel=INFO --without-gossip --without-mingle --without-heartbeat -Ofair --pool=solo
Are any command or configuration changes needed? Thanks in advance.
I think you should not use --pool=solo in your command, because the solo pool can only hold one task at a time.
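For example, a sketch of the same command with a concurrent pool instead (the prefork pool is often problematic on Windows, so a thread-based pool is a common choice; --concurrency matches the 8 shown in the banner above):

celery -A rana worker --loglevel=INFO --without-gossip --without-mingle --without-heartbeat -Ofair --pool=threads --concurrency=8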

Celery - how to use multiple queues?

I want to create multiple queues for different tasks: for example, an emailqueue for sending emails and a pipedrivequeue for syncing tasks with the Pipedrive API, so that emails don't have to wait until all Pipedrive syncs are done, and vice versa.
I'm new to routing and I tried two approaches to create the queues, but neither of them seems to work.
This is the preferred approach: I tried to define the queue inside the @task decorator.
@task(bind=True, queue='pipedrivequeue')
def backsync_lead(self, lead_id):
    ...
settings.py
CELERY_ROUTES = {  # tried CELERY_TASK_ROUTES too
    'pipedrive.tasks.*': {'queue': 'pipedrivequeue'},
    ...
}
In both cases, when I run the Celery worker manually, I see only the one default celery queue.
(project) milano@milano-PC:~/PycharmProjects/project$ celery -A project.celery worker -l info
-------------- celery@milano-PC v4.2.2 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.15.0-47-generic-x86_64-with-Ubuntu-18.04-bionic 2019-04-12 17:17:05
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: project:0x7f3b47f66cf8
- ** ---------- .> transport: redis://localhost:6379//
- ** ---------- .> results: redis://localhost/
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. project.apps.apis.pipedrive.tasks.backsync_all_stages
. project.apps.apis.pipedrive.tasks.backsync_lead
As you can see in these lines:
-------------- [queues]
.> celery exchange=celery(direct) key=celery
there is apparently just one queue. I want to use this default queue only for tasks that have no queue specified.
Do you know where the problem is?
EDIT
(project) milano@milano-PC:~/PycharmProjects/project$ celery inspect active_queues
Error: No nodes replied within time constraint.
You need to run a worker with the queue named explicitly; then Django will be able to feed into that queue:
celery worker -A project.celery -l info # Default queue worker
celery worker -A project.celery -l info -Q pipedrivequeue # pipedrivequeue worker
celery worker -A project.celery -l info -Q testqueue # testqueue worker
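If you also want the queues and the routing declared on the Celery app itself, a minimal sketch for settings.py (assuming your celery.py calls app.config_from_object('django.conf:settings', namespace='CELERY'); adjust the names to your project):

from kombu import Queue

CELERY_TASK_QUEUES = (
    Queue('celery'),          # default queue
    Queue('pipedrivequeue'),
    Queue('testqueue'),
)
CELERY_TASK_ROUTES = {
    'pipedrive.tasks.*': {'queue': 'pipedrivequeue'},
}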

Celery: which queues are consumed if the -Q option is not specified?

According to the Celery documentation, the -Q/--queues command line option can be used for:
-Q, --queues
List of queues to enable for this worker, separated by comma. By default all configured queues are enabled. Example: -Q video,image
However, I don't understand what configured queues means here. Does it mean all queues known to Celery, including the default one? Or only the ones defined in the task_queues config option? Does the task_create_missing_queues option affect this?
If you haven't configured anything, it will consume from the celery queue, as you can see from the logs:
celery worker -A t
-------------- celery@pavilion v4.0.2 (latentcall)
---- **** -----
--- * *** * -- Linux-4.4.0-79-generic-x86_64-with-Ubuntu-16.04-xenial 2017-06-09 10:39:14
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7f15cf9cdfd0
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: rpc://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
You can also configure Celery to consume a set of queues by default, like this:
from celery import Celery
from kombu import Queue
app = Celery(broker='amqp://guest@localhost//', backend='rpc')
app.conf.task_queues = (Queue('foo'), Queue('bar'))
Now all workers will consume the foo and bar queues by default.
-------------- [queues]
.> bar exchange=celery(direct) key=celery
.> foo exchange=celery(direct) key=celery
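To actually send a task to one of those queues at call time, a quick sketch (the add task here is hypothetical, just for illustration):

@app.task
def add(x, y):
    return x + y

add.apply_async((2, 3), queue='foo')  # routed to the foo queue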
I was facing an issue where, when I would execute
$ celery -A myCeleryConfig worker -Q myQueue2
I would get the error:
celery.exceptions.ImproperlyConfigured: Trying to select queue subset of ['myQueue2'], but queue 'myQueue2' isn't defined in the `task_queues` setting.
The documentation for the task_queues setting was unclear to me. It does state:
If you really want to configure advanced routing, this setting should be a list of kombu.Queue objects the worker will consume from.
I wasn't sure what it meant by this, and no code examples are provided in the documentation. But thanks to @Chillar's response, I found that configuring
app.conf.task_queues = (Queue('myQueue1'), Queue('myQueue2'))
solved the issue. I now see
-------------- [queues]
.> myQueue1 exchange=myQueue1 key=myQueue1
.> myQueue2 exchange=myQueue2 key=myQueue2
when I start the worker, indicating the queues are now registered.
