Celery receives periodic tasks but doesn't execute them - python

I use Celery to run periodic tasks in my Django (DRF) app. Unfortunately, the registered tasks are never executed.
Project structure:
project_name
└── cron_tasks
    ├── __init__.py
    └── celery.py
celery.py:
from celery import Celery, shared_task
from django.conf import settings

app = Celery('cron_tasks', include=['cron_tasks.celery'])
app.conf.broker_url = settings.RABBITMQ_URL
app.autodiscover_tasks()
app.conf.redbeat_redis_url = settings.REDBEAT_REDIS_URL
app.conf.broker_pool_limit = 1
app.conf.broker_heartbeat = None
app.conf.broker_connection_timeout = 30
app.conf.worker_prefetch_multiplier = 1

app.conf.beat_schedule = {
    'first_warning_overdue': {
        'task': 'cron_tasks.celery.test_task',
        'schedule': 60.0,  # seconds
        'options': {'queue': 'default', 'expires': 43100.0}
    }
}

@shared_task
def test_task():
    app.send_task('cron_tasks.celery.test_action')

def test_action():
    print('action!')  # this print is never executed
    # I also tried to change data instead, but that never happens either.
    from django.contrib.auth import get_user_model
    u = get_user_model().objects.get(id=1)
    u.first_name = "testttt"
    u.save()
settings.py:
RABBITMQ_URL = os.environ.get('RABBITMQ_URL')
REDBEAT_REDIS_URL = os.environ.get('REDBEAT_REDIS_URL')
CELERY_BROKER_URL = os.environ.get('RABBITMQ_URL')
CELERYD_TASK_SOFT_TIME_LIMIT = 60
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = os.environ.get('REDBEAT_REDIS_URL')
CELERY_IMPORTS = ("cron_tasks.celery", )
from kombu import Queue
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
    Queue('default'),
)
CELERY_CREATE_MISSING_QUEUES = True
redbeat_redis_url = REDBEAT_REDIS_URL
RabbitMQ is running properly; I can see it in the Celery worker's terminal output:
- ** ---------- .> transport: amqp://admin:**@localhost:5672/my_vhost
Redis responds to pings as well; I use Redis (via RedBeat) to store the beat schedule.
I run:
celery beat -S redbeat.RedBeatScheduler -A cron_tasks.celery:app --loglevel=debug
It shows:
[2019-02-15 09:32:44,477: DEBUG/MainProcess] beat: Waking up in 10.00 seconds.
[2019-02-15 09:32:54,480: DEBUG/MainProcess] beat: Extending lock...
[2019-02-15 09:32:54,481: DEBUG/MainProcess] Selecting tasks
[2019-02-15 09:32:54,482: INFO/MainProcess] Loading 1 tasks
[2019-02-15 09:32:54,483: INFO/MainProcess] Scheduler: Sending due task first_warning_overdue (cron_tasks.celery.test_task)
[2019-02-15 09:32:54,484: DEBUG/MainProcess] cron_tasks.celery.test_task sent. id->f89083aa-11dc-41fc-9ebe-541840951f8f
Celery worker is run this way:
celery worker -Q default -A cron_tasks.celery:app -n .%%h --without-gossip --without-mingle --without-heartbeat --loglevel=info --max-memory-per-child=512000
It says:
-------------- celery@.%me.local v4.2.1 (windowlicker)
---- **** -----
--- * *** * -- Darwin-16.7.0-x86_64-i386-64bit 2019-02-15 09:31:50
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: cron_tasks:0x10e2a5ac8
- ** ---------- .> transport: amqp://admin:**@localhost:5672/my_vhost
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
[tasks]
. cron_tasks.celery.test_task
[2019-02-15 09:31:50,833: INFO/MainProcess] Connected to amqp://admin:**@127.0.0.1:5672/my_vhost
[2019-02-15 09:31:50,867: INFO/MainProcess] celery@.%me.local ready.
[2019-02-15 09:41:46,218: INFO/MainProcess] Received task: cron_tasks.celery.test_task[3c121f04-af3b-4cbe-826b-a32da6cc156e] expires:[2019-02-15 21:40:05.779231+00:00]
[2019-02-15 09:41:46,220: INFO/ForkPoolWorker-2] Task cron_tasks.celery.test_task[3c121f04-af3b-4cbe-826b-a32da6cc156e] succeeded in 0.001324941000000024s: None
Expected behavior:
This should run my test_action().
But even though the Celery worker output says succeeded in 0.001324941000000024s, the function never executes.
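A hedged reading of the output above (an editor's note, not a confirmed fix): the worker's [tasks] list only contains cron_tasks.celery.test_task, and test_action is a plain function, so the message that test_task sends via app.send_task('cron_tasks.celery.test_action') has no registered task that could execute it; test_task itself reports succeeded because its only job is to enqueue that second message. Two other things stand out in the code shown: the inner send_task call passes no queue, so its message may go to Celery's default celery queue while the worker only consumes default (-Q default), and the celery.py shown never calls app.config_from_object('django.conf:settings'), so the CELERY_* values in settings.py (including CELERY_DEFAULT_QUEUE) may not be applied at all. If that is the cause, a sketch of a registered, explicitly routed test_action could look like this (names are taken from the question; everything else is an assumption):

@app.task(name='cron_tasks.celery.test_action')
def test_action():
    print('action!')

@app.task(name='cron_tasks.celery.test_task')
def test_task():
    # send_task dispatches by name: a worker must have a task registered under
    # exactly this name and must be consuming the queue the message is sent to
    app.send_task('cron_tasks.celery.test_action', queue='default')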

Related

Celery keeps trying to connect to localhost instead of Amazon SQS

So I'm trying to set up Celery in my Django project, using Amazon SQS as my broker. However, Celery keeps trying to find SQS on localhost for some reason.
This is my settings.py:
CELERY_BROKER_TRANSPORT = "sqs"
CELERY_BROKER_USER = env.str("DJANGO_AWS_ACCESS_KEY_ID")
CELERY_BROKER_PASSWORD = env.str("DJANGO_AWS_SECRET_ACCESS_KEY")
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": env.str("DJANGO_AWS_SQS_REGION_NAME", default="us-east-2"),
    "polling_interval": 10,
}
CELERY_DEFAULT_QUEUE = "default"
CELERY_ACCEPT_CONTENT = ["application/json"]
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_CONTENT_ENCODING = "utf-8"
CELERY_ENABLE_REMOTE_CONTROL = False
CELERY_SEND_EVENTS = False
CELERY_SQS_QUEUE_NAME = "default"
This is my celery.py:
import os
from celery import Celery
# set the default django settings module
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.production')
app = Celery('consumers') # type: Celery
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
When I start the worker using celery -A src.consumers worker --loglevel=debug, the worker tries to start with the following output and then immediately stops:
-------------- celery@aditya-PC v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.15.0-52-generic-x86_64-with-glibc2.35 2022-10-27 13:56:01
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: consumers:0x7fd77051de40
- ** ---------- .> transport: sqs://AHJJHHFYTA3GHVJHB8:**@localhost:6379//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. src.consumers.tasks.app1_test
How can I make celery not try to connect to localhost, and connect to SQS instead?
Hey, this is kind of a non-issue. The way the transport URL is rendered makes it look like localhost, but it actually writes to the SQS queue.
[2022-10-27 18:46:53,847: INFO/MainProcess] Connected to sqs://localhost//
This is a log from our prod environment, and everything works.
Create a message in SQS and you will see that it gets processed.
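If you want to verify the same thing yourself, a quick check (an editor's sketch, reusing the src.consumers.tasks.app1_test task that already appears in the [tasks] list above) is to queue a task from python manage.py shell while the worker is running:

# sketch: app1_test is the task registered in the worker output above
from src.consumers.tasks import app1_test

result = app1_test.delay()
print(result.id)
# the worker log should then show "Received task: src.consumers.tasks.app1_test[...]",
# and the message briefly appears on the SQS queue in the AWS console

If the task is picked up, the sqs://...@localhost// line is purely cosmetic, as the answer says.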

How to see the Celery messages in Redis?

I have a Celery worker running with Redis as the broker.
Starting the worker processes gives me this:
celery -A celeryworker worker --loglevel=INFO
-------------- celery@cd38f5e26c28 v5.2.1 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.25-linuxkit-x86_64-with-glibc2.28 2021-12-14 00:22:02
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: myapp:0x7f96dd51af10
- ** ---------- .> transport: redis://redis-container:6379/1
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 6 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> 0 exchange=0(direct) key=0
[tasks]
. app.tasks.bye
. app.tasks.printme
[2021-12-14 00:22:02,708: INFO/MainProcess] Connected to redis://redis-container:6379/1
[2021-12-14 00:22:02,717: INFO/MainProcess] mingle: searching for neighbors
[2021-12-14 00:22:03,740: INFO/MainProcess] mingle: all alone
[2021-12-14 00:22:03,762: INFO/MainProcess] celery@cd38f5e26c28 ready.
[2021-12-14 00:22:23,332: INFO/MainProcess] Task app.task.bye[7e28e6a0-8aaa-4609-bd85-9312e91cb355] received
[2021-12-14 00:23:23,326: INFO/ForkPoolWorker-3] Task app.tasks.bye[7e28e6a0-8aaa-4609-bd85-9312e91cb355] succeeded in 60.061842500006605s: 'the text was byebye!!'
This is what I can see in redis right after starting the celery workers:
127.0.0.1:6379[1]> keys *
1) "_kombu.binding.0"
2) "_kombu.binding.celery.pidbox"
3) "_kombu.binding.celeryev"
Even if I put a long delay in my tasks (sleep(60)), so that they take 60 seconds to run, I still don't see anything in my Redis container.
mget <key> returns nil for all keys above.
I was expecting to see messages incoming in form of ID or something into Redis (I can see messages if I use SQS as broker, but not for redis).
Your messages are picked up immediately by your worker.
To actually see where Redis stores them, stop your worker process and then publish a task (you can call task.delay(*args, **kwargs) from a Python shell).
You'll find your messages under the celery key in your Redis database.
(Screenshot: Celery keys in Redis.)
Note: check your Redis broker URL and which logical database it is using.
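To make the broker contents visible, here is a minimal sketch (an editor's addition; it assumes the redis-container host and logical database 1 from the transport line above, and that the redis Python package is installed). With the worker stopped, queue a task and then inspect the list that kombu creates for the queue; the list is named after the queue, so celery by default, or 0 for the worker banner shown above:

import json
import redis

r = redis.Redis(host="redis-container", port=6379, db=1)
queue_name = "celery"  # or "0", matching the [queues] line in the worker banner
print(r.llen(queue_name))             # number of messages waiting in that queue
for raw in r.lrange(queue_name, 0, -1):
    msg = json.loads(raw)             # each entry is a JSON-encoded kombu message
    print(msg["headers"].get("task"), msg["headers"].get("id"))

As soon as a worker reconnects and picks the messages up, the list empties again, which is why keys * right after starting the workers only shows the _kombu.binding.* keys.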

Celery consumes all queues

I'm trying to achieve the following configuration:
1. Send a message to RabbitMQ.
2. Copy this message to 2 different queues.
3. Run 2 consumers, where each of them consumes from its own queue.
So, I have this to send messages:
def publish_message():
    with app.producer_pool.acquire(block=True) as producer:
        producer.publish(
            body={"TEST": "OK"},
            exchange='myexchange',
            routing_key='mykey',
        )
I create my consumers this way:
with app.pool.acquire(block=True) as conn:
    exchange = kombu.Exchange(
        name="myexchange",
        type="direct",
        durable=True,
        channel=conn,
    )
    exchange.declare()

    queue1 = kombu.Queue(
        name="myqueue",
        exchange=exchange,
        routing_key="mykey",
        channel=conn,
        # message_ttl=600,
        # queue_arguments={
        #     "x-queue-type": "classic"
        # },
        durable=True
    )
    queue1.declare()

    queue2 = kombu.Queue(
        name="myotherqueue",
        exchange=exchange,
        routing_key="mykey",
        channel=conn,
        # message_ttl=600,
        # queue_arguments={
        #     "x-queue-type": "classic"
        # },
        durable=True
    )
    queue2.declare()

class MyConsumer1(bootsteps.ConsumerStep):
    def get_consumers(self, channel):
        return [
            kombu.Consumer(
                channel,
                queues=[queue1],
                callbacks=[self.handle],
                accept=["json"]
            )
        ]

    def handle(self, body, message):
        print(f"\n### 1 ###\nBODY: {body}\nCONS: {self.consumers}\n#########\n")
        message.ack()

class MyConsumer2(bootsteps.ConsumerStep):
    def get_consumers(self, channel):
        return [
            kombu.Consumer(
                channel,
                queues=[queue2],
                callbacks=[self.handle],
                accept=["json"]
            )
        ]

    def handle(self, body, message):
        print(f"\n### 2 ###\nBODY: {body}\nCONS: {self.consumers}\n#########\n")
        message.ack()

app.steps["consumer"].add(MyConsumer1)
app.steps["consumer"].add(MyConsumer2)
But when I run a worker restricted to a single queue (first with -X myotherqueue, then with -Q myqueue), I get this:
(venv) ➜ src git:(rabbitmq-experiment) ✗ celery -A settings worker -X myotherqueue --hostname 1@%h
-------------- 1@gonczor v5.0.5 (singularity)
--- ***** -----
-- ******* ---- Linux-5.11.0-37-generic-x86_64-with-glibc2.31 2021-10-14 19:37:26
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: blacksheeplearns:0x7f5abbbf6d30
- ** ---------- .> transport: amqp://admin:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[2021-10-14 19:37:27,683: WARNING/MainProcess] /home/gonczor/Projects/Learn Web Dev/learn-web-dev/venv/lib/python3.9/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
warnings.warn('''Using settings.DEBUG leads to a memory
[2021-10-14 19:37:30,724: WARNING/MainProcess] ### 2 ###
BODY: {'TEST': 'OK'}
CONS: [<Consumer: [<Queue myotherqueue -> <Exchange myexchange(direct) bound to chan:4> -> mykey bound to chan:4>]>]
#########
[2021-10-14 19:37:30,724: WARNING/MainProcess] ### 1 ###
BODY: {'TEST': 'OK'}
CONS: [<Consumer: [<Queue myqueue -> <Exchange myexchange(direct) bound to chan:5> -> mykey bound to chan:5>]>]
#########
^C
worker: Hitting Ctrl+C again will terminate all running tasks!
worker: Warm shutdown (MainProcess)
(venv) ➜ src git:(rabbitmq-experiment) ✗ celery -A settings worker -Q myqueue --hostname 1@%h
-------------- 1@gonczor v5.0.5 (singularity)
--- ***** -----
-- ******* ---- Linux-5.11.0-37-generic-x86_64-with-glibc2.31 2021-10-14 19:38:37
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: blacksheeplearns:0x7f037539dd30
- ** ---------- .> transport: amqp://admin:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> myqueue exchange=myqueue(direct) key=myqueue
[2021-10-14 19:38:38,855: WARNING/MainProcess] /home/gonczor/Projects/Learn Web Dev/learn-web-dev/venv/lib/python3.9/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
warnings.warn('''Using settings.DEBUG leads to a memory
[2021-10-14 19:39:00,574: WARNING/MainProcess] ### 2 ###
BODY: {'TEST': 'OK'}
CONS: [<Consumer: [<Queue myotherqueue -> <Exchange myexchange(direct) bound to chan:4> -> mykey bound to chan:4>]>]
#########
[2021-10-14 19:39:00,574: WARNING/MainProcess] ### 1 ###
BODY: {'TEST': 'OK'}
CONS: [<Consumer: [<Queue myqueue -> <Exchange myexchange(direct) bound to chan:5> -> mykey bound to chan:5>]>]
#########
So, as you can see, a single worker consumes messages from both queues. I've checked that the message is dispatched correctly: with the Celery worker shut down, I could confirm that the message appeared on both queues, myqueue and myotherqueue.
I'm running this as part of a Django project, if that is important.
EDIT: Maybe someone will find this helpful: when I spawned 2 workers, I started getting the following error message on random occasions. The routing is also random:
[2021-10-14 20:20:29,780: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
The full contents of the message body was: body: {'TEST': 'OK'} (14b)
{content_type:'application/json' content_encoding:'utf-8'
delivery_info:{'consumer_tag': 'None6', 'delivery_tag': 2, 'redelivered': False, 'exchange': 'myexchange', 'routing_key': 'mykey'} headers={}}
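One hedged explanation for this behaviour (an editor's note, not something confirmed in the thread): app.steps["consumer"].add(...) registers the bootstep on the app itself, so every worker created from that app starts both MyConsumer1 and MyConsumer2, regardless of -Q or -X, which only restrict the queues of the built-in task consumer. The "Received and deleted unknown message" warning fits the same picture: -Q myqueue also subscribes the built-in task consumer to myqueue, and it discards the raw {'TEST': 'OK'} payload because it is not a Celery task message. Under that assumption, a crude workaround is to register only the step a given worker should run, for example selected through an environment variable (the CUSTOM_CONSUMER switch below is hypothetical, not part of the original code):

import os

which = os.environ.get("CUSTOM_CONSUMER")
if which == "myqueue":
    app.steps["consumer"].add(MyConsumer1)
elif which == "myotherqueue":
    app.steps["consumer"].add(MyConsumer2)
else:
    # default: keep the original behaviour and attach both consumers
    app.steps["consumer"].add(MyConsumer1)
    app.steps["consumer"].add(MyConsumer2)

Each worker is then started with its own value, e.g. CUSTOM_CONSUMER=myqueue celery -A settings worker --hostname 1@%h.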

Simple periodic task in celery doesn't work but no errors

I'm new to Celery. I'm trying to properly configure Celery with my Django project. To test whether Celery works, I've created a periodic task which should print "periodic_task" every 2 seconds. Unfortunately it doesn't work, but there is no error.
1. Installed RabbitMQ.
2. Project/project/celery.py:
from __future__ import absolute_import
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
from django.conf import settings  # noqa

app = Celery('project')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def myfunc():
    print 'periodic_task'

@app.task(bind=True)
def debudeg_task(self):
    print('Request: {0!r}'.format(self.request))
3. Project/project/__init__.py:
from __future__ import absolute_import
from .celery import app as celery_app
4. Settings.py:
INSTALLED_APPS = [
    'djcelery',
    ...
]
...
...
CELERYBEAT_SCHEDULE = {
    'schedule-name': {
        'task': 'project.celery.myfunc',  # We are going to create an email_sending_method later in this post.
        'schedule': timedelta(seconds=2),
    },
}
And before python manage.py, I run celery -A project worker -l info
I still can't see any "periodic_task" printed in the console every 2 seconds... Do you know what to do?
EDIT CELERY CONSOLE:
-------------- celery@Milwou_NB v3.1.23 (Cipater)
---- **** -----
--- * *** * -- Windows-8-6.2.9200
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: dolava:0x33d1350
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. project.celery.debudeg_task
. project.celery.myfunc
EDIT:
After changing worker to beat, it seems to work. Something is happening every 2 seconds (I changed it to 5 seconds), but I can't see the results of the task. (I can put anything into CELERYBEAT_SCHEDULE, even a wrong path, and it doesn't raise any error.)
I changed myfunc code to:
@app.task(bind=True)
def myfunc():
    # notifications.send_message_to_admin('sdaa','dsadasdsa')
    with open('text.txt', 'a') as f:
        f.write('sa')
But I can't see text.txt anywhere.
> celery -A dolava beat -l info
celery beat v3.1.23 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://guest:**@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2016-10-26 17:46:50,135: INFO/MainProcess] beat: Starting...
[2016-10-26 17:46:50,138: INFO/MainProcess] Writing entries...
[2016-10-26 17:46:51,433: INFO/MainProcess] DatabaseScheduler: Schedule changed.
[2016-10-26 17:46:51,433: INFO/MainProcess] Writing entries...
[2016-10-26 17:46:51,812: INFO/MainProcess] Scheduler: Sending due task schedule-name (dolava_app.tasks.myfunc)
[2016-10-26 17:46:51,864: INFO/MainProcess] Writing entries...
[2016-10-26 17:46:57,138: INFO/MainProcess] Scheduler: Sending due task schedule-name (dolava_app.tasks.myfunc)
[2016-10-26 17:47:02,230: INFO/MainProcess] Scheduler: Sending due task schedule-name (dolava_app.tasks.myfunc)
Try to run
$ celery -A project beat -l info
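Worth spelling out (this is an editor's addition, not stated in the thread): beat only schedules and sends due tasks to the broker, while a worker is what actually executes them, which would explain why the EDIT above shows "Sending due task ..." with no visible output when only beat is running. The usual setup is to run both at the same time, for example:
celery -A project beat -l info
celery -A project worker -l info
On Celery 3.1 a single process can also do both with celery -A project worker -B -l info.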

Celery send_task doesn't send tasks

I have a server running Celery with RabbitMQ. But when I try to send tasks using send_task, it just returns an AsyncResult object; the actual task never runs (even though the workers are idle and the queues are empty).
c = Celery("tasks", broker="amqp://guest#127.0.0.1//")
c.send_task("tasks.printing.test_print", (100), queue="print_queue", routing_key="printing.test_print")
My celery configuration is:
CELERY_QUEUES = (
    Queue('default', routing_key='task.#'),
    Queue('print_queue', routing_key='printing.#'),
)
CELERY_DEFAULT_EXCHANGE = 'tasks'
CELERY_ROUTES = {
    'tasks.printing.test_print': {
        'queue': 'print_queue',
        'routing_key': 'printing.test_print',
    },
}
BROKER_URL = 'amqp://'
I execute only one worker:
celery -A celerymain worker --loglevel=debug
This is its initial log:
- ** ---------- [config]
- ** ---------- .> app: __main__:0x7eff96903b50
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: amqp://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> default exchange=tasks(topic) key=task.#
.> print_queue exchange=tasks(topic) key=printing.#
[tasks]
. test_print
This is the task:
class test_print(Task):
    name = "test_print"

    def run(self, a):
        log.info("running")
        print a
The RabbitMQ queue 'print_queue' stays empty and there is nothing new in the RabbitMQ logs.
I have 4 GB free space so it's not a disk space problem.
What can be the problem here?
I solved the problem by removing the routing_key parameter from send_task.
I don't really know why this was a problem, but at least it works now:
#app.task(name="test_print")
class test_print(Task):
name = "test_print"
def run(self,a):
log.info("running")
print a
