Celery periodic task not periodic - python

I tried to create a task that should run every minute in Celery, with Redis as the broker and result backend.
To start Redis I ran "redis-server"
To start Celery I ran "celery -A tasks worker --loglevel=info"
This is my tasks.py file
from celery import Celery
from celery.schedules import crontab
from celery.task import periodic_task

app = Celery('tasks', backend='redis://localhost', broker='redis://localhost')

@app.task
def add(x, y):
    return x + y

@periodic_task(run_every=(crontab(minute='1')), name="run_every_minute", ignore_result=True)
def run_every_minute():
    print("hehe")
    return "ok"
When I ran the following in a Python console
from tasks import run_every_minute
z = run_every_minute.delay()
I got output at celery running terminal as
[2019-06-05 01:35:02,591: INFO/MainProcess] Received task: run_every_minute[06498b4b-1d13-45af-b91c-fb10476e0aa3]
[2019-06-05 01:35:02,595: WARNING/Worker-2] hehe
[2019-06-05 01:35:02,599: INFO/MainProcess] Task run_every_minute[06498b4b-1d13-45af-b91c-fb10476e0aa3] succeeded in
0.004713802001788281s: 'ok'
But this should execute every minute since it's a periodic task. How can I make that happen?
Also, how can we execute a Celery task at a specific time, say 5:30 GMT (for example)?

Ok, based on the commentary:
First, periodic_task needs the scheduler/beat to be started (see Periodic Tasks); with it running, the scheduler will send the task according to its run_every parameter:
celery -A tasks beat
Next, if you need the task sent every minute, the crontab needs to look like this:
@periodic_task(run_every=(crontab(minute='*')), name="run_every_minute", ignore_result=True)
def run_every_minute():
    print("hehe")
    return "ok"
With minute='*', it will send the task every minute. minute='1' will send the task once every hour, at minute one.

Answering your last comment:
run_every=(crontab(minute='1'))
You have specified 'minute of hour' = 1, so celery beat runs your periodic task every hour at minute '1', e.g. 00:01, 01:01 and so on.
You should also set the hour attribute on your crontab, probably as a range.
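For the follow-up about running at a fixed time of day (for example 5:30 GMT), here is a minimal sketch, assuming the same app and that its timezone is configured as UTC; the task name run_at_0530_utc is just for illustration:
app.conf.timezone = 'UTC'

@periodic_task(run_every=crontab(hour=5, minute=30), name="run_at_0530_utc", ignore_result=True)
def run_at_0530_utc():
    # beat sends this once a day, at 05:30 UTC
    print("daily task")
    return "ok"
This still needs both beat and a worker running, as above.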

Related

Why does this Celery "hello world" loop forever?

Consider the code:
from celery import Celery, group
from time import time

app = Celery('tasks', broker='redis:///0', backend='redis:///1', task_ignore_result=False)

@app.task
def test_task(i):
    print('hi')
    return i

x = test_task.delay(3)
print(x.get())
I run it by calling python script.py, but I'm getting no results. Why?
You don't get any results because you've asked your celery app to execute a task without starting a worker process to do the work of executing it. The process you did start is blocked on the call to get().
First things first: when using celery it is critical that tasks are not executed as a side effect of importing a module, so let's put your task execution inside a main() function, and put it in a file called celery_test.py.
from celery import Celery, group
from time import time

app = Celery('tasks', broker='redis:///0', backend='redis:///1', task_ignore_result=False)

@app.task
def test_task(i):
    print('hi')
    return i

def main():
    x = test_task.delay(3)
    print(x.get())

if __name__ == '__main__':
    main()
Now let's start a pool of celery workers to execute tasks for this app. You can do this by opening a new terminal and executing the following.
celery worker -A celery_test --loglevel=INFO
The -A flag refers to the module where celery will find an application to add workers to. You should see some output in the terminal indicating that the celery worker is running and ready for tasks to process.
Now, try executing your script again with python celery_test.py. You should see hi show up in the worker's log output, and the value 3 printed by the script that called get().
Be warned, if you've been playing with celery without running a worker, it probably has lots of tasks waiting in your broker to execute. The first time you start up the worker pool, you'll see them all execute in parallel until the broker runs out of tasks.
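If you would rather discard that backlog than run it, Celery's purge command clears every message waiting in the app's configured queues (shown here against this example app; it deletes the tasks permanently, so use it only in development):
celery -A celery_test purge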

django celery only calls 1 of 2 apply_async task

I need to call the following 2 apply_async tasks:
escalate.apply_async((e.id), countdown=3)
escalate.apply_async((e.id), countdown=3)
My tasks implementation looks like:
@app.task
def escalate(id, group):
    escalation_email, created = EscalationEmail.objects.get_or_create()
    escalation_email.send()
    return 'sup email sent'
I run the work with the following command:
celery -A proj worker -l info --concurrency=10
The problem is that when I look at the worker, only 1 task is received and then only 1 succeeds. Also, only 1 email is sent.
It seems that most of the time it is the second escalate task that runs.
How can I ensure that these tasks both fire 100% of the time with reliability?
The problem was that I did not choose a queue to associate the task with.
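For illustration only, since the answer above doesn't show the fix: one way to associate the task with an explicit queue is to name it in apply_async and have the worker consume that queue. The queue name escalations is hypothetical, and note the trailing comma that makes the args a real one-element tuple:
escalate.apply_async((e.id,), countdown=3, queue='escalations')
Then start the worker so it listens on that queue:
celery -A proj worker -l info -Q escalations --concurrency=10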

periodic task using celery to delete a queryset result

I'm trying to execute a periodic task using celery to delete users who didn't activate their account in time. The screenshot below shows that the task is correctly discovered and executed, but when I check the database, no changes are made.
The celery task :
# tasks.py
from celery.task.schedules import crontab
from celery.decorators import periodic_task
from celery.utils.log import get_task_logger
from .utils import unconfirmed_users_delete

logger = get_task_logger(__name__)

# A periodic task that will run every minute (the symbol "*" means every)
@periodic_task(run_every=(crontab(hour="*", minute="*", day_of_week="*")))
def delete_unconfirmed_users():
    return unconfirmed_users_delete()
The queryset to execute (checked in django shell and correctly working) :
# utils.py
from django.contrib.auth.models import User
from django.utils import timezone

def unconfirmed_users_delete():
    return User.objects.filter(is_active=False).filter(profile__key_expires__lt=timezone.now()).delete()
The task is correctly called every minute:
What could be wrong?
As @schillingt mentioned, most of the time we forget to (re)start the worker process for the periodic task.
This happens because we have a beat scheduler which schedules the task and worker which executes the task.
celery -A my_task beat # schedule tasks
celery worker -A my_task -l info # consume tasks
A simpler alternative is a single worker that both schedules and executes tasks. You can do that with:
celery worker -A my_task -l info --beat # schedule & consume tasks
This schedules the periodic task and consumes it.

Celery : starting PeriodicTask after starting worker

I'm working with Celery http://celery.readthedocs.org/en/latest/index.html
I need to run a periodic task at a specific moment, but I only want to start my task after starting the celery worker.
For that I'm trying to create my own "PeriodicTask", but I'm dealing with a problem.
When I start the worker and execute run_tasks.py in another terminal, it seems that my periodic task is executed only once.
What should I do to have my periodic task run every 3 seconds?
Here is a part of the code.
Start celery :
celery worker --app=worker_manager.celery --loglevel=info
file tasks.py
class MyPeriodicTask(PeriodicTask):
    name = "periodic-task"
    run_every = timedelta(seconds=3)

    def run(self, **kwargs):
        logger = self.get_logger(**kwargs)
        logger.info("Running periodic task!")
file run_tasks.py
tasks.register(MyPeriodicTask)
wmi_collector_task = worker_app.tasks[MyPeriodicTask.name]
Thanks in advance.
To run periodic tasks you need to start celery beat. You can do this by passing the -B argument when starting the workers:
celery worker -B --app=worker_manager.celery --loglevel=info
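On newer Celery versions the PeriodicTask class is gone; a roughly equivalent sketch uses the beat_schedule setting instead (the entry name and the task path tasks.my_periodic_task are illustrative, not from the original code):
from datetime import timedelta

app.conf.beat_schedule = {
    'periodic-task-every-3s': {
        'task': 'tasks.my_periodic_task',  # hypothetical task registered with @app.task
        'schedule': timedelta(seconds=3),
    },
}
Starting the worker with -B as above then sends that task every 3 seconds.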

Django celery custom periodic task not executing

I tried using this code to dynamically add/remove scheduled tasks.
My tasks.py file looks like this:
from celery.decorators import task
import logging

log = logging.getLogger(__name__)

@task
def mytask():
    log.debug("Executing task")
    return
The problem is that the tasks do not actually execute (i.e. there is no log output), but I get the following messages in my celery log file, exactly on schedule:
[2013-05-10 04:53:00,005: INFO/MainProcess] Got task from broker: cron.tasks.mytask[dfcf397b-e30b-45bd-9f5f-11a17a51b6c4]
[2013-05-10 04:54:00,007: INFO/MainProcess] Got task from broker: cron.tasks.mytask[f013b3cd-6a0f-4060-8bcc-3bb51ffaf092]
[2013-05-10 04:55:00,007: INFO/MainProcess] Got task from broker: cron.tasks.mytask[dfc0d563-ff4b-4132-955a-4293dd3a9ac7]
[2013-05-10 04:56:00,012: INFO/MainProcess] Got task from broker: cron.tasks.mytask[ba093535-0d70-4dc5-89e4-441b72cfb61f]
I can definitely confirm that the logger is configured correctly and working fine. If I call result = mytask.delay() in the interactive shell, result.state stays PENDING indefinitely.
EDIT: See also Django Celery Periodic Tasks Run But RabbitMQ Queues Aren't Consumed
