Django model creation with multithreading - python

My Django app requires a background thread that executes every 60 seconds, reads entries from a model called "rawdata", runs some calculations based on the data, and generates a new record in another model called "results".
After doing some research I realize this can be done with Celery; however, a full async task framework feels like overkill for what I need. I am new to Django and am trying to determine whether my implementation below is safe.
My app launches the background process using the "ready" hook of Django like this:
import os

from django.apps import AppConfig


class Myapp(AppConfig):
    name = 'Myapp'
    ready_done = False

    def ready(self):
        if os.environ.get('RUN_MAIN') and not Myapp.ready_done:
            from scripts.recordgen import RecordGenerator
            Myapp.ready_done = True
            Myapp.record_generator = RecordGenerator()
In the app directory I have /scripts/recordgen.py, which spawns a basic independent thread that adds new data to the database every 60 seconds. Something like this:
import threading
import time

from Myapp.models import results, rawdata


class RecordGenerator(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.start()

    def run(self):
        while True:
            time.sleep(60)
            # Query the rawdata here and do some calculations.
            # Just filling with dummy data for this example.
            dummydata = "dummy data"
            new_record = results()
            new_record.data = dummydata
            new_record.save()
If records are only added to the results table in this thread, and only read, updated and deleted through Django views, is this a thread-safe implementation? The database I'm using is PostgreSQL.
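For reference, here is a minimal sketch of the same thread with the daemon flag set and stale database connections cleaned up before each write; close_old_connections is a real Django helper, but whether this covers every concern in your setup is an assumption, not a guarantee:

import threading
import time

from django.db import close_old_connections

from Myapp.models import results


class RecordGenerator(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)  # let the thread die with the server process
        self.start()

    def run(self):
        while True:
            time.sleep(60)
            # Each thread gets its own DB connection; drop any stale one before writing.
            close_old_connections()
            new_record = results()
            new_record.data = "dummy data"  # placeholder for the real calculations
            new_record.save()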

Related

Function running in background all the time (and startup itself) in Django app

I created a simple Django app. Inside this app I have a single checkbox. I save the checkbox state to the database: if it's checked the database stores True, if it's unchecked it stores False. There is no problem with this part. Now I have created a function that prints this checkbox's state value from the database every 10 seconds, all the time.
I put the function into the views.py file and it looks like this:
def get_value():
    while True:
        value_change = TurnOnOff.objects.first()
        if value_change.turnOnOff:
            print("true")
        else:
            print("false")
        time.sleep(10)
The point is that this function should run all the time. For example, if models.py contains checkbox = models.BooleanField(default=False), then after I run python manage.py runserver it should give me output like:
Performing system checks...
System check identified no issues (0 silenced).
January 04, 2019 - 09:19:47
Django version 2.1.3, using settings 'CMS.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
true
true
true
true
Then if I visit the website and change the state, it should print false; that part is obvious. But as you can see, the problem is how to start this method. It should run all the time, even if I haven't visited the website yet, and that part confuses me. How do I do this properly?
I should admit that I have tried some solutions:
putting this function at the end of the manage.py file,
putting this function into def ready(self),
creating a middleware class and calling the method there (example code below).
But these solutions don't work.
middleware class :
class SimpleMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response
        get_value()
You can achieve this by using the AppConfig.ready() hook and combining it with a sub-process/thread.
Here is an example apps.py file (based on the tutorial Polls app):
import time
from multiprocessing import Process

from django.apps import AppConfig
from django import db


class TurnOnOffMonitor(Process):
    def __init__(self):
        super().__init__()
        self.daemon = True

    def run(self):
        # This import needs to be delayed. It needs to happen after apps are
        # loaded, so we put it into the method here (it won't work as a
        # top-level import).
        from .models import TurnOnOff

        # Because this is a subprocess, we must ensure that we get new
        # connections dedicated to this process to avoid interfering with the
        # main connections. Closing any existing connection *should* ensure
        # this.
        db.connections.close_all()

        # We can do an endless loop here because we flagged the process as
        # being a "daemon". This ensures it will exit when the parent exits.
        while True:
            value_change = TurnOnOff.objects.first()
            if value_change.turnOnOff:
                print("true")
            else:
                print("false")
            time.sleep(10)


class PollsConfig(AppConfig):
    name = 'polls'

    def ready(self):
        monitor = TurnOnOffMonitor()
        monitor.start()
Celery is the thing that best suits your needs from what you've described.
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing, Eventlet, or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready).
You need to create a task and run it periodically; you can also trigger it manually if you want (from some view/controller).
NOTE: do not use time.sleep(10)
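As a rough illustration of that advice, here is a minimal sketch of a periodic Celery task driven by the beat scheduler; the broker URL, app name and model import path are assumptions for this example:

from celery import Celery

app = Celery('myproject', broker='redis://localhost:6379/0')  # broker URL is an assumption


@app.task(name='print_checkbox_state')
def print_checkbox_state():
    from myapp.models import TurnOnOff  # late import so Django apps are loaded
    value_change = TurnOnOff.objects.first()
    print("true" if value_change and value_change.turnOnOff else "false")


# Run the task every 10 seconds via celery beat instead of time.sleep(10).
app.conf.beat_schedule = {
    'print-checkbox-state-every-10s': {
        'task': 'print_checkbox_state',
        'schedule': 10.0,
    },
}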

Celery with continuous deployment

I have a service that exposes an API which then feeds tasks; it is implemented with Falcon (API) and Celery (task management).
Specifically, my workers take a long time to load and their code looks something like this:
class HeavyOp(celery.Task):
    def __init__(self):
        self._asset = get_heavy_asset()  # <-- takes long time

    @property
    def asset(self):
        return self._asset


@app.task(base=HeavyOp)
def my_task(data):
    return my_task.asset.do_something(data)
What actually goes on is that in the __init__ function some object is being read from disk and held in memory for as long as the worker lives.
Sometimes, I want to update that object.
Is there a way to reload the worker, without downtime? As this is all behind an API, I don't wish to have those few minutes of loading the heavy object as downtime.
We can assume the host has more than 1 core, but the solution must be a single host solution.
I don't think you need a custom base task class. What you want is a single-instance asset class that gets loaded after the worker has initialised and that you can reload from a task.
This approach works:
# worker.py
import os
import sys
import time

from celery import Celery
from celery.signals import worker_ready

app = Celery(include=('tasks',))


class Asset:
    def __init__(self):
        self.time = time.time()


class AssetLoader:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
        if '_value' not in self.__dict__:
            self.get_heavy_asset()

    def get_heavy_asset(self):
        self._value = Asset()

    @property
    def value(self):
        return self._value


@worker_ready.connect
def after_worker_ready(sender, **kwargs):
    AssetLoader()
Here, I made AssetLoader a Borg class, but you can choose any other pattern/strategy to share a single instance of Asset. For illustrative purposes, I just capture the timestamp when executing get_heavy_asset.
# tasks.py
from worker import app, AssetLoader


@app.task(bind=True)
def load(self):
    AssetLoader().get_heavy_asset()
    return AssetLoader().value.time


@app.task(bind=True)
def my_task(self):
    return AssetLoader().value.time
Bear in mind that Asset is shared per worker process but not across workers. If you run with concurrency=1, it doesn't make a difference, but for anything else it does. But from what I gather in your use case, it should be fine either way.
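As a usage sketch (the broker configuration and the call site are assumptions), the API layer can then refresh the heavy asset without restarting the worker by sending the load task:

# Hypothetical call site, e.g. inside a Falcon resource handler.
from tasks import load, my_task

# Note: each worker process has its own Borg state, so only the process
# that executes this task rebuilds its copy of the asset.
load.delay()

# Regular work keeps flowing through the worker in the meantime.
result = my_task.delay()
print(result.get(timeout=10))  # the timestamp captured by get_heavy_asset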

Queue object shared between threads and objects in separate modules

I am trying to build a system that will crawl a remote server continuously and download new files locally. I want crawling and downloading split into separate objects and separate threads. To keep track of the files found on the server and those that still need to be downloaded, I will use a PriorityQueue.
Because the system will grow later with more tasks added, I need a main module that sits on top, and I need to instantiate the PriorityQueue in that main module. But I have a problem with how to share this PriorityQueue between the main module and the crawler.
Here is the code below, ignoring the download part for now since it doesn't play into the problem yet: I can't figure out how to make the Crawler object "see" the queue object created in main.py.
main.py
import crawler
import threading
from queue import PriorityQueue


class Watchdog(object):
    def __init__(self):
        self.queue = PriorityQueue

    def setup(self):
        self.crawler = crawler.Crawler()

    def run(self):
        t1 = threading.Thread(target=self.crawler.crawl(), args=(<pysftp connection>,))
        t1.daemon = True
        t2.daemon = True
        t1.start()
        t2.start()
        t1.join()
        t2.join()
crawler.py
import pysftp


class Crawler(object):
    def __init__(self, connection):
        self.connection = connection

    def crawl(self):
        callbacks = pysftp.WTCallbacks()
        self.connection.walktree(rootdir, fcallback=callbacks.file_cb, dcallback=callbacks.dir_cb, ucallback=callbacks.unk_cb)
        for fpath in callbacks.flist:
            with queue.mutex:
                if fpath not in queue.queue:
                    queue.put(os.path.getmtime(fpath), fpath)
The problem is that I cannot figure out how to make the queue object I create in main.py reachable and shared in crawler.py. When I add the download task, it should also be able to see the queue object, and I need the queue to be synced across all modules, so that when a new file is added by the crawler, the downloader sees it immediately.
You need to use the multiprocessing module. Check the documentation at https://docs.python.org/3.6/library/multiprocessing.html#exchanging-objects-between-processes
You could also use Celery.
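Following the multiprocessing suggestion, here is a minimal sketch of the pattern from the linked docs: the queue is created once in the parent process and handed to each worker, so both sides see the same object. The crawl and download functions below are hypothetical stand-ins for your Crawler and downloader:

from multiprocessing import Process, Queue


def crawl(queue):
    # Stand-in for Crawler.crawl(): push discovered paths onto the shared queue.
    for fpath in ('/remote/file_a', '/remote/file_b'):
        queue.put(fpath)
    queue.put(None)  # sentinel: nothing more to download


def download(queue):
    # Stand-in for the downloader: consume paths as they arrive.
    while True:
        fpath = queue.get()
        if fpath is None:
            break
        print('downloading', fpath)


if __name__ == '__main__':
    q = Queue()  # created once in the parent, passed to both workers
    workers = [Process(target=crawl, args=(q,)), Process(target=download, args=(q,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()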

Django - run a function every x seconds

I'm working on a Django app. I have an API endpoint which, if requested, must carry out a function that has to be repeated a few times (until a certain condition is true). How I'm dealing with it right now is:
def shut_down(request):
    # Do some stuff
    while True:
        result = some_fn()
        if result:
            break
        time.sleep(2)
    return True
While I know this is a terrible approach and that I shouldn't be blocking for 2 seconds, I can't figure out how to get around it.
This works, after a wait of say 4 seconds. But I'd like something that keeps the loop running in the background and stops once some_fn returns True. (Also, it is certain that some_fn will eventually return True.)
EDIT -
Reading Oz123's response gave me an idea which seems to work. Here's what I did -
def shut_down(params):
    # Do some stuff

    # Offload the blocking job to a new thread
    t = threading.Thread(target=some_fn, args=(id,), kwargs={})
    t.setDaemon(True)
    t.start()

    return True


def some_fn(id):
    while True:
        # Do the job, get result in res
        # If the job is done, return. Otherwise sleep the thread for 2 seconds before trying again.
        if res:
            return
        else:
            time.sleep(2)
This does the job for me. It's simple but I don't know how efficient multithreading is in conjunction with Django.
If anyone can point out pitfalls of this, criticism is appreciated.
For many small projects Celery is overkill. For those projects you can use schedule; it's very easy to use.
With this library you can make any function execute a task periodically:
import schedule
import time


def job():
    print("I'm working...")


schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
The example runs in a blocking manner, but if you look in the FAQ you will find that you can also run tasks in a parallel thread, so that you are not blocking, and remove a task once it is no longer needed:
import threading
import time
from schedule import Scheduler


def run_continuously(self, interval=1):
    """Continuously run, while executing pending jobs at each elapsed
    time interval.
    @return cease_continuous_run: threading.Event which can be set to
    cease continuous run.
    Please note that it is *intended behavior that run_continuously()
    does not run missed jobs*. For example, if you've registered a job
    that should run every minute and you set a continuous run interval
    of one hour then your job won't be run 60 times at each interval but
    only once.
    """
    cease_continuous_run = threading.Event()

    class ScheduleThread(threading.Thread):
        @classmethod
        def run(cls):
            while not cease_continuous_run.is_set():
                self.run_pending()
                time.sleep(interval)

    continuous_thread = ScheduleThread()
    continuous_thread.setDaemon(True)
    continuous_thread.start()
    return cease_continuous_run

Scheduler.run_continuously = run_continuously
Here is an example for usage in a class method:
def foo(self):
    ...
    if some_condition():
        return schedule.CancelJob  # a job can dequeue itself

# can be put in __enter__ or __init__
self._job_stop = self.scheduler.run_continuously()

logger.debug("doing foo...")
self.foo()  # call foo
self.scheduler.every(5).seconds.do(
    self.foo)  # schedule foo to run every 5 seconds

...
# later on, when foo is not needed any more:
self._job_stop.set()

...

def __exit__(self, exec_type, exc_value, traceback):
    # if the jobs are not stopped yet, you can stop them here
    self._job_stop.set()
This answer expands on Oz123's answer a little bit.
In order to get things working, I created a file called mainapp/jobs.py to contain my scheduled jobs. Then, in my apps.py module, I put from . import jobs in the ready method. Here's my entire apps.py file:
from django.apps import AppConfig
import os


class MainappConfig(AppConfig):
    name = 'mainapp'

    def ready(self):
        from . import jobs
        if os.environ.get('RUN_MAIN', None) != 'true':
            jobs.start_scheduler()
(The RUN_MAIN check is because python manage.py runserver runs the ready method twice—once in each of two processes—but we only want to run it once.)
Now, here's what I put in my jobs.py file. First, the imports. You'll need to import Scheduler, threading and time as below. The F and UserHolding imports are just for what my job does; you won't import these.
from django.db.models import F
from schedule import Scheduler
import threading
import time
from .models import UserHolding
Next, write the function you want to schedule. The following is purely an example; your function won't look anything like this.
def give_admin_gold():
    admin_gold_holding = (UserHolding.objects
        .filter(inventory__user__username='admin', commodity__name='gold'))

    admin_gold_holding.update(amount=F('amount') + 1)
Next, monkey-patch the schedule module by adding a run_continuously method to its Scheduler class. Do this by using the below code, which is copied verbatim from Oz123's answer.
def run_continuously(self, interval=1):
    """Continuously run, while executing pending jobs at each elapsed
    time interval.
    @return cease_continuous_run: threading.Event which can be set to
    cease continuous run.
    Please note that it is *intended behavior that run_continuously()
    does not run missed jobs*. For example, if you've registered a job
    that should run every minute and you set a continuous run interval
    of one hour then your job won't be run 60 times at each interval but
    only once.
    """
    cease_continuous_run = threading.Event()

    class ScheduleThread(threading.Thread):
        @classmethod
        def run(cls):
            while not cease_continuous_run.is_set():
                self.run_pending()
                time.sleep(interval)

    continuous_thread = ScheduleThread()
    continuous_thread.setDaemon(True)
    continuous_thread.start()
    return cease_continuous_run

Scheduler.run_continuously = run_continuously
Finally, define a function to create a Scheduler object, wire up your job, and call the scheduler's run_continuously method.
def start_scheduler():
    scheduler = Scheduler()
    scheduler.every().second.do(give_admin_gold)
    scheduler.run_continuously()
I recommend you use Celery's task management. You can refer to this to set up the app (a package, if you're coming from a JavaScript background).
Once it's set up, you can alter the code to:
@app.task
def check_shut_down():
    if not some_fun():
        # re-queue the task to run again after 2 seconds
        # (delay() cannot take execution options, so use apply_async)
        check_shut_down.apply_async(countdown=2)
    else:
        # task completed; do something to notify yourself
        return True
I can't comment on oz123's (https://stackoverflow.com/a/44897678/1108505) and Tanner Swett's (https://stackoverflow.com/a/60244694/5378866) excellent posts, but as a final note I wanted to add that if you use Gunicorn and you have X workers, the section:
from django.apps import AppConfig
import os


class MainappConfig(AppConfig):
    name = 'mainapp'

    def ready(self):
        from . import jobs
        if os.environ.get('RUN_MAIN', None) != 'true':
            jobs.start_scheduler()
will be executed that same number of times, launching X schedulers at the same time.
If we want only one instance of it to run (for example if you're going to create objects in the database), we would have to add something like this to our gunicorn.conf.py file:
def on_starting(server):
    from app_project import jobs
    jobs.start_scheduler()
And finally, add the --preload argument to the gunicorn call.
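Putting the pieces together, a sketch of what the full gunicorn.conf.py might look like; the worker count is an arbitrary example, and preload_app is the config-file equivalent of passing --preload on the command line:

# gunicorn.conf.py
workers = 4
preload_app = True  # same effect as the --preload command-line flag


def on_starting(server):
    # Runs once in the master process, before workers are forked,
    # so the scheduler is started a single time.
    from app_project import jobs
    jobs.start_scheduler()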
Here is my solution, with sources noted. This class lets you create a scheduler that you can start with your app, then add and remove jobs at will. The run_pending_interval argument lets you trade off system resources against job execution timing.
from schedule import Scheduler
import threading
import warnings
import time


class RepeatTimer(threading.Timer):
    """Add repeated run of target to timer functionality. Source: https://stackoverflow.com/a/48741004/16466191"""
    running: bool = False

    def __init__(self, *args, **kwargs):
        threading.Timer.__init__(self, *args, **kwargs)

    def start(self) -> None:
        """Protect from running start method multiple times"""
        if not self.running:
            super(RepeatTimer, self).start()
            self.running = True
        else:
            warnings.warn('Timer is already running, cannot be started again.')

    def cancel(self) -> None:
        """Protect from running stop method multiple times"""
        if self.running:
            super(RepeatTimer, self).cancel()
            self.running = False
        else:
            warnings.warn('Timer is already canceled, cannot be canceled again.')

    def run(self):
        """Replace run method of timer to run continuously"""
        while not self.finished.wait(self.interval):
            self.function(*self.args, **self.kwargs)


class ThreadedScheduler(Scheduler, RepeatTimer):
    """Non-blocking scheduler. Advice taken from: https://stackoverflow.com/a/50465583/16466191"""
    def __init__(
            self,
            run_pending_interval: float,
    ):
        """Initialize parent classes"""
        Scheduler.__init__(self)
        super(RepeatTimer, self).__init__(
            interval=run_pending_interval,
            function=self.run_pending,
        )


def print_work(what_to_say: str):
    print(what_to_say)


if __name__ == '__main__':
    my_schedule = ThreadedScheduler(run_pending_interval=1)
    job1 = my_schedule.every(1).seconds.do(print_work, what_to_say='Did_job1')
    job2 = my_schedule.every(2).seconds.do(print_work, what_to_say='Did_job2')
    my_schedule.cancel()
    my_schedule.start()
    time.sleep(7)
    my_schedule.cancel_job(job1)
    my_schedule.start()
    time.sleep(7)
    my_schedule.cancel()

is Celery Task initialized per each worker process, or once per app?

I have a heavy external library class which takes time to initialize and consumes a lot of memory. I want to create it once per task instance, at minimum.
class NlpTask(Task):
    def __init__(self):
        print('initializing NLP parser')
        self._parser = nlplib.Parser()
        print('done initializing NLP parser')

    @property
    def parser(self):
        return self._parser


@celery.task(base=NlpTask)
def my_task(arg):
    x = my_task.parser.process(arg)
    # etc.
Celery starts 32 worker processes, so I'd expect to see "initializing ... done" printed 32 times, as I assume a task instance is created per worker. Surprisingly, I'm seeing it printed only once. What actually happens there? Thanks.
Your NlpTask is initialized once, when it is registered with the worker.
If you have two tasks like
@celery.task(base=NlpTask)
def foo(arg):
    pass


@celery.task(base=NlpTask)
def bar(arg):
    pass
Then when you start a worker, you will see 2 initializations.
If you want to initialize it once for every worker process, you can use the worker_process_init signal.
from celery.signals import worker_process_init


@worker_process_init.connect()
def setup(**kwargs):
    print('initializing NLP parser')
    # setup
    print('done initializing NLP parser')
Now, when you start a worker, you will see that setup is called once by each process.
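A sketch of how the per-process object created in that signal handler can be exposed to tasks through a module-level global; nlplib and its Parser are placeholders taken from the question, and the Celery app object is assumed to be named celery, as in the question:

from celery.signals import worker_process_init

_parser = None  # one instance per worker process, set after the fork


@worker_process_init.connect
def setup(**kwargs):
    global _parser
    print('initializing NLP parser')
    _parser = nlplib.Parser()  # hypothetical heavy object from the question
    print('done initializing NLP parser')


@celery.task
def my_task(arg):
    # Each worker process sees its own _parser, built exactly once.
    return _parser.process(arg)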
In response to this comment:
"that's my point - I'd expect once per worker, and it seems like once per celery instance. I edited the question" – davka
the answer must be to use a sender filter in connect, like this:

@worker_process_init.connect(sender='xx')
def func(sender, **kwargs):
    if sender == 'xx':
        # do something
        pass

But I found that this does not work in Celery 4.0.2.
