Own params to PeriodicTask run() method in Celery - python

I am writing a small Django application in which each model object should get its own periodic task, executed at a certain interval. I'm using Celery for this, but I can't understand one thing:
class ProcessQueryTask(PeriodicTask):
    run_every = timedelta(minutes=1)

    def run(self, query_task_pk, **kwargs):
        logging.info('Process celery task for QueryTask %d' % query_task_pk)
        task = QueryTask.objects.get(pk=query_task_pk)
        task.exec_task()
        return True
Then I do the following:
>>> from tasks.tasks import ProcessQueryTask
>>> result1 = ProcessQueryTask.delay(query_task_pk=1)
>>> result2 = ProcessQueryTask.delay(query_task_pk=2)
The first call succeeds, but the subsequent periodic calls fail in the celeryd server with:
TypeError: run() takes exactly 2 non-keyword arguments (1 given)
Can I pass my own params to PeriodicTask run()?

This was answered wonderfully by Ask Solem in his response to your question on the celery-users Google group.
Periodic tasks don't take arguments, so you need to make several classes or make one periodic task that processes more than one "model".
E.g.:
from celery.task import PeriodicTask
from celery.decorators import periodic_task
# base class
class BaseProcessQueryTask(PeriodicTask):
    abstract = True
    run_every = timedelta(minutes=1)
    query_task_pk = None

    def run(self):
        task = QueryTask.objects.get(pk=self.query_task_pk)
        task.exec_task()

class ProcessQueryTask1(BaseProcessQueryTask):
    query_task_pk = 1

class ProcessQueryTask2(BaseProcessQueryTask):
    query_task_pk = 2
but it's more likely you want something like this:
@task(ignore_result=True)
def execute_query_task(task):
    task.exec_task()

@periodic_task(run_every=timedelta(minutes=1))
def process_query_tasks():
    for task in QueryTask.objects.all():
        execute_query_task.delay(task)
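One caveat worth adding (my note, not from the original answer): passing a model instance to delay() means the object is serialized into the message, so it can go stale or fail to pickle. A safer variant is to pass the primary key and re-fetch inside the task; a minimal sketch, assuming the same QueryTask model:

@task(ignore_result=True)
def execute_query_task(query_task_pk):
    # Re-fetch inside the task so we always act on fresh data.
    QueryTask.objects.get(pk=query_task_pk).exec_task()

@periodic_task(run_every=timedelta(minutes=1))
def process_query_tasks():
    for pk in QueryTask.objects.values_list('pk', flat=True):
        execute_query_task.delay(pk)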

Related

Celery Flower - tasks not shown when I define tasks by custom task classes

I'm using Celery and Flower for async jobs.
When I define tasks.py like this:
import os
import time
from celery import Celery, Task
celery = Celery(__name__)
celery.conf.broker_url = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379")
celery.conf.result_backend = os.environ.get("CELERY_RESULT_BACKEND", "redis://localhost:6379")
@celery.task(name="create_task")
def create_task(task_type):
    time.sleep(int(task_type) * 10)
    return True
the executed tasks are shown at ${flower host}/tasks. But when I define create_task() like this, the executed tasks aren't shown at ${flower host}/tasks:
class MyTask(Task):
    def run(self, task_type):
        time.sleep(int(task_type) * 10)
        return True

create_task = celery.register_task(MyTask())
Both of them execute tasks successfully, and I can see the number of executed tasks in Flower's dashboard. And as far as I can tell from the documentation, the task definitions are fine:
https://docs.celeryq.dev/en/stable/userguide/tasks.html#custom-task-classes
What's the difference?
I found the reason: when I execute the task, I need to pass the arguments (in this case, task_type) as explicit keyword arguments.
task = create_task.delay(task_type=int(task_type))
sorry for my TED Talk.
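A related detail that may also matter here (my note, not from the original post): a class-based task registered this way gets an auto-generated name such as tasks.MyTask, while the decorator version was explicitly named create_task, so Flower can list the two under different names. Task classes support an explicit name attribute; a sketch under that assumption:

class MyTask(Task):
    name = "create_task"  # explicit name, matching the decorator version

    def run(self, task_type):
        time.sleep(int(task_type) * 10)
        return True

create_task = celery.register_task(MyTask())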

Celery with continuous deployment

I have a service that exposes an API which then feeds tasks; it is implemented with Falcon (API) and Celery (task management).
Specifically, my workers take a long time to load, and their code looks something like this:
class HeavyOp(celery.Task):
    def __init__(self):
        self._asset = get_heavy_asset()  # <-- takes a long time

    @property
    def asset(self):
        return self._asset

@app.task(base=HeavyOp)
def my_task(data):
    return my_task.asset.do_something(data)
What actually goes on is that in the __init__ function some object is being read from disk and held in memory for as long as the worker lives.
Sometimes, I want to update that object.
Is there a way to reload the worker, without downtime? As this is all behind an API, I don't wish to have those few minutes of loading the heavy object as downtime.
We can assume the host has more than 1 core, but the solution must be a single host solution.
I don't think you need a custom base task class. What you want is a single shared Asset instance that gets loaded after the worker has initialised and that you can reload from a task.
This approach works:
# worker.py
import os
import sys
import time
from celery import Celery
from celery.signals import worker_ready
app = Celery(include=('tasks',))
class Asset:
    def __init__(self):
        self.time = time.time()

class AssetLoader:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
        if '_value' not in self.__dict__:
            self.get_heavy_asset()

    def get_heavy_asset(self):
        self._value = Asset()

    @property
    def value(self):
        return self._value

@worker_ready.connect
def after_worker_ready(sender, **kwargs):
    AssetLoader()
Here, I made AssetLoader a Borg class, but you can choose any other pattern/strategy to share a single instance of Asset. For illustrative purposes, I just capture the timestamp when executing get_heavy_asset.
# tasks.py
from worker import app, AssetLoader
@app.task(bind=True)
def load(self):
    AssetLoader().get_heavy_asset()
    return AssetLoader().value.time

@app.task(bind=True)
def my_task(self):
    return AssetLoader().value.time
Bear in mind that Asset is shared per worker process but not across workers. If you run with concurrency=1, it doesn't make a difference, but for anything else it does. But from what I gather in your use case, it should be fine either way.
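For completeness, a hypothetical caller's view of the two tasks above (the names come from tasks.py; a configured broker and result backend are assumed):

# client.py -- usage sketch
from tasks import load, my_task

print(my_task.delay().get())  # timestamp of the Asset currently held by the worker process
print(load.delay().get())     # rebuilds the Asset in whichever process picks up the task
print(my_task.delay().get())  # reflects the reload if handled by the same process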

Django - run a function every x seconds

I'm working on a Django app. I have an API endpoint which, if requested, must carry out a function that is repeated a few times (until a certain condition is true). How I'm dealing with it right now is -
def shut_down(request):
    # Do some stuff
    while True:
        result = some_fn()
        if result:
            break
        time.sleep(2)
    return True
While I know that this is a terrible approach and that I shouldn't be blocking for 2 seconds, I can't figure out how to get around it.
This works, after say a wait of 4 seconds. But I'd like something that keeps the loop running in the background and stops once some_fn returns True. (Also, it is certain that some_fn will return True.)
EDIT -
Reading Oz123's response gave me an idea which seems to work. Here's what I did -
def shut_down(params):
    # Do some stuff
    # Offload the blocking job to a new thread
    t = threading.Thread(target=some_fn, args=(id, ), kwargs={})
    t.setDaemon(True)
    t.start()
    return True
def some_fn(id):
    while True:
        # Do the job, get result in res.
        # If the job is done, return; otherwise sleep for 2 seconds before trying again.
        if res:
            return
        else:
            time.sleep(2)
This does the job for me. It's simple but I don't know how efficient multithreading is in conjunction with Django.
If anyone can point out pitfalls of this, criticism is appreciated.
For many small projects Celery is overkill. For those projects you can use schedule; it's very easy to use.
With this library you can make any function run periodically:
import schedule
import time
def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
The example runs in a blocking manner, but if you look in the FAQ you will find that you can also run tasks in a parallel thread, so that you are not blocking, and remove a task once it is not needed anymore:
import threading
import time
from schedule import Scheduler
def run_continuously(self, interval=1):
    """Continuously run, while executing pending jobs at each elapsed
    time interval.
    @return cease_continuous_run: threading.Event which can be set to
    cease continuous run.
    Please note that it is *intended behavior that run_continuously()
    does not run missed jobs*. For example, if you've registered a job
    that should run every minute and you set a continuous run interval
    of one hour then your job won't be run 60 times at each interval but
    only once.
    """
    cease_continuous_run = threading.Event()

    class ScheduleThread(threading.Thread):
        @classmethod
        def run(cls):
            while not cease_continuous_run.is_set():
                self.run_pending()
                time.sleep(interval)

    continuous_thread = ScheduleThread()
    continuous_thread.setDaemon(True)
    continuous_thread.start()
    return cease_continuous_run

Scheduler.run_continuously = run_continuously
Here is an example for usage in a class method:
def foo(self):
    ...
    if some_condition():
        return schedule.CancelJob  # a job can dequeue itself

# can be put in __enter__ or __init__
self._job_stop = self.scheduler.run_continuously()

logger.debug("doing foo...")
self.foo()  # call foo
self.scheduler.every(5).seconds.do(
    self.foo)  # schedule foo to run every 5 seconds

...
# later on, when foo is not needed any more:
self._job_stop.set()

...
def __exit__(self, exec_type, exc_value, traceback):
    # if the jobs are not stopped yet, you can stop them here
    self._job_stop.set()
This answer expands on Oz123's answer a little bit.
In order to get things working, I created a file called mainapp/jobs.py to contain my scheduled jobs. Then, in my apps.py module, I put from . import jobs in the ready method. Here's my entire apps.py file:
from django.apps import AppConfig
import os
class MainappConfig(AppConfig):
    name = 'mainapp'

    def ready(self):
        from . import jobs
        if os.environ.get('RUN_MAIN', None) != 'true':
            jobs.start_scheduler()
(The RUN_MAIN check is because python manage.py runserver runs the ready method twice—once in each of two processes—but we only want to run it once.)
Now, here's what I put in my jobs.py file. First, the imports. You'll need to import Scheduler, threading and time as below. The F and UserHolding imports are just for what my job does; you won't import these.
from django.db.models import F
from schedule import Scheduler
import threading
import time
from .models import UserHolding
Next, write the function you want to schedule. The following is purely an example; your function won't look anything like this.
def give_admin_gold():
    admin_gold_holding = (UserHolding.objects
        .filter(inventory__user__username='admin', commodity__name='gold'))
    admin_gold_holding.update(amount=F('amount') + 1)
Next, monkey-patch the schedule module by adding a run_continuously method to its Scheduler class. Do this by using the below code, which is copied verbatim from Oz123's answer.
def run_continuously(self, interval=1):
    """Continuously run, while executing pending jobs at each elapsed
    time interval.
    @return cease_continuous_run: threading.Event which can be set to
    cease continuous run.
    Please note that it is *intended behavior that run_continuously()
    does not run missed jobs*. For example, if you've registered a job
    that should run every minute and you set a continuous run interval
    of one hour then your job won't be run 60 times at each interval but
    only once.
    """
    cease_continuous_run = threading.Event()

    class ScheduleThread(threading.Thread):
        @classmethod
        def run(cls):
            while not cease_continuous_run.is_set():
                self.run_pending()
                time.sleep(interval)

    continuous_thread = ScheduleThread()
    continuous_thread.setDaemon(True)
    continuous_thread.start()
    return cease_continuous_run

Scheduler.run_continuously = run_continuously
Finally, define a function to create a Scheduler object, wire up your job, and call the scheduler's run_continuously method.
def start_scheduler():
    scheduler = Scheduler()
    scheduler.every().second.do(give_admin_gold)
    scheduler.run_continuously()
I recommend you use Celery's task management. You can refer to the Celery documentation to set up the app (a package, if you're coming from a JavaScript background).
Once set, you can alter the code to:
@app.task
def check_shut_down():
    if not some_fn():
        # re-queue the task to run again after 2 secs
        check_shut_down.apply_async(countdown=2)
    else:
        # task completed; do something to notify yourself
        return True
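Kicking off the first check from the view would then look something like this (a sketch; shut_down comes from the question and the task from the snippet above):

def shut_down(request):
    # Do some stuff, then hand the polling loop over to Celery
    check_shut_down.delay()
    return True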
I can't comment on oz123's (https://stackoverflow.com/a/44897678/1108505) and Tanner Swett's (https://stackoverflow.com/a/60244694/5378866) excellent posts, but as a final note I wanted to add that if you use Gunicorn and you have X workers, this section:
from django.apps import AppConfig
import os
class MainappConfig(AppConfig):
    name = 'mainapp'

    def ready(self):
        from . import jobs
        if os.environ.get('RUN_MAIN', None) != 'true':
            jobs.start_scheduler()
will be executed that same number of times, launching X schedulers at the same time.
If we want only one instance of it to run (for example, if you're going to create objects in the database), we have to add something like this to our gunicorn.conf.py file:
def on_starting(server):
    from app_project import jobs
    jobs.start_scheduler()
And finally, add the --preload argument to the gunicorn invocation.
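Putting the two together, a gunicorn.conf.py might look like this (a sketch; preload_app = True is the config-file equivalent of the --preload flag, and app_project comes from the snippet above):

# gunicorn.conf.py
preload_app = True  # same effect as passing --preload on the command line

def on_starting(server):
    # runs once in the master process, before workers are forked
    from app_project import jobs
    jobs.start_scheduler()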
Here is my solution, with sources noted. This code will let you create a scheduler that you can start with your app, then add and remove jobs at will. The run_pending_interval parameter allows you to trade off between system resources and job-execution timing.
from schedule import Scheduler
import threading
import warnings
import time
class RepeatTimer(threading.Timer):
    """Add repeated run of target to timer functionality. Source: https://stackoverflow.com/a/48741004/16466191"""
    running: bool = False

    def __init__(self, *args, **kwargs):
        threading.Timer.__init__(self, *args, **kwargs)

    def start(self) -> None:
        """Protect from running start method multiple times"""
        if not self.running:
            super(RepeatTimer, self).start()
            self.running = True
        else:
            warnings.warn('Timer is already running, cannot be started again.')

    def cancel(self) -> None:
        """Protect from running stop method multiple times"""
        if self.running:
            super(RepeatTimer, self).cancel()
            self.running = False
        else:
            warnings.warn('Timer is already canceled, cannot be canceled again.')

    def run(self):
        """Replace run method of timer to run continuously"""
        while not self.finished.wait(self.interval):
            self.function(*self.args, **self.kwargs)

class ThreadedScheduler(Scheduler, RepeatTimer):
    """Non-blocking scheduler. Advice taken from: https://stackoverflow.com/a/50465583/16466191"""
    def __init__(
            self,
            run_pending_interval: float,
    ):
        """Initialize parent classes"""
        Scheduler.__init__(self)
        super(RepeatTimer, self).__init__(
            interval=run_pending_interval,
            function=self.run_pending,
        )

def print_work(what_to_say: str):
    print(what_to_say)

if __name__ == '__main__':
    my_schedule = ThreadedScheduler(run_pending_interval=1)
    job1 = my_schedule.every(1).seconds.do(print_work, what_to_say='Did_job1')
    job2 = my_schedule.every(2).seconds.do(print_work, what_to_say='Did_job2')
    my_schedule.cancel()
    my_schedule.start()
    time.sleep(7)
    my_schedule.cancel_job(job1)
    my_schedule.start()
    time.sleep(7)
    my_schedule.cancel()

Is a Celery Task initialized once per worker process, or once per app?

I have a heavy external library class which takes time to initialize and consumes a lot of memory. I want to create it once per task instance, at minimum.
class NlpTask(Task):
    def __init__(self):
        print('initializing NLP parser')
        self._parser = nlplib.Parser()
        print('done initializing NLP parser')

    @property
    def parser(self):
        return self._parser

@celery.task(base=NlpTask)
def my_task(arg):
    x = my_task.parser.process(arg)
    # etc.
Celery starts 32 worker processes, so I'd expect to see "initializing ... done" printed 32 times, as I assume a task instance is created per worker. Surprisingly, it is printed once. What actually happens there? Thanks.
Your NlpTask is initialized once, when it is registered with the worker.
If you have two tasks like
@celery.task(base=NlpTask)
def foo(arg):
    pass

@celery.task(base=NlpTask)
def bar(arg):
    pass
Then when you start a worker, you will see 2 initializations.
If you want to initialize it once for every worker process, you can use the worker_process_init signal.
from celery.signals import worker_process_init
@worker_process_init.connect()
def setup(**kwargs):
    print('initializing NLP parser')
    # setup
    print('done initializing NLP parser')
Now, when you start a worker, you will see setup being called once by each process.
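To make the per-process parser usable from tasks, one common approach (a sketch of mine, not part of the original answer; nlplib comes from the question) is to stash the instance in a module-level global inside the signal handler:

from celery.signals import worker_process_init

parser = None  # one instance per worker process

@worker_process_init.connect
def setup(**kwargs):
    global parser
    print('initializing NLP parser')
    parser = nlplib.Parser()
    print('done initializing NLP parser')

@celery.task
def my_task(arg):
    return parser.process(arg)  # uses this process's parser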
Regarding this comment:
"that's my point - I'd expect once per worker, and it seems like once per celery instance. I edited the question" – davka
the answer must be to use a sender filter in connect, like:
@worker_process_init.connect(sender='xx')
def func(sender, **kwargs):
    if sender == 'xx':
        # do something
        pass
but I found that it's not working in celery 4.0.2.

Celery: update state of task when using classes

Hi, I'm trying to update the state of a method which is executed as a task, as described in http://docs.celeryproject.org/en/latest/reference/celery.contrib.methods.html:
from celery import Celery
celery = Celery()
class A(object):
    def __init__(self):
        self.a = 0

    @celery.task(filter=task_method)
    def add(self):
        self.a += 10
        for i in range(10):
            self.update_state(state="PROGRESS", meta={
                "current": i, "total": 10, "status": "Sleeping"
            })
        return {"current": 100, "total": 100, "status": "Complete."}

a = A()
a.add.delay()
This gives an error:
AttributeError: 'A' object has no attribute 'update_state'
That seems logical to me, since A does not inherit from Task, so it hasn't got the update_state method.
Question: how do I update the state of a task when using method-based tasks?
Update:
As described in the comments below, updating the status of a task which is
not bound is impossible, therefore the celery.contrib.methods way of defining methods as tasks is not usable in my example.
Probably you can do it like that:
from celery import Celery, current_task
celery = Celery()
class A:
    @celery.task(filter=task_method)
    def add(self):
        # ...
        current_task.update_state('PROGRESS', meta={...})

a = A()
a.add.delay()
Notice that I use the current_task proxy instead of the self variable (which here denotes the class instance; in bound Celery tasks, by contrast, self denotes the current task).
Alternatively (I didn't check this, but it should probably work as well), you may be able to bind the class-method task too:
class A:
    @celery.task(filter=task_method, bind=True)
    def add(self, task):
        task.update_state('PROGRESS', meta={...})
You'll probably have to swap the self and task arguments for this to work properly; I'm not sure about that.
BTW, it seems that celery.contrib.methods was removed in Celery 4.0.
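Since celery.contrib.methods is gone in Celery 4+, here is a rough sketch of the equivalent with a plain bound task (bind=True makes the task instance available as self, so update_state works directly; passing the counter in as an argument is an assumption of this sketch):

@celery.task(bind=True)
def add(self, a):
    a += 10
    for i in range(10):
        self.update_state(state="PROGRESS", meta={
            "current": i, "total": 10, "status": "Sleeping"
        })
    return {"current": 100, "total": 100, "status": "Complete."}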
