Access process data from a Django view - Python

I'm quite new to python and Django and I'm struggling to understand what's the best way to tackle my problem, which is:
I have a view which has to activate a process/task/thread and return a success.
The process/task/thread operates a device and it will update its status based on the device inputs.
I then have another view, which I will poll with AJAX, and this view should be able to query that background process/task/thread for its status and return it to the caller.
I've read about quite a few different options like multiprocessing, gevent, celery and sessions, but I still can't get my head around it.
I tried using the session, but obviously I don't have access to the request object from within the background task.
I didn't try gevent or celery, just because I thought there would be an easier solution that doesn't need any additional frameworks (I don't really want to install RabbitMQ etc...).
I tried multiprocessing, and this is the code:
from multiprocessing import Manager, Process
import json
from django.http import HttpResponse

def test_process(request):
    manager = Manager()
    d = manager.dict()
    p = Process(target=test_function, args=(d,))
    p.daemon = True
    p.start()
    return HttpResponse(json.dumps('Ok'), content_type="application/json")

def test_function(d):
    d['test'] = 'alex'

def test_manager(request):
    manager = Manager()
    data = manager.dict().get('test')
    return HttpResponse(json.dumps(data), content_type="application/json")
After writing this I realized that the dictionary is probably only shared between the background process and the request process that executed test_process, so test_manager gets an empty dictionary.
I don't know where to go from here.
Any help?
Cheers

To share data between a child and a parent process using the multiprocessing interface you may use one of the methods proposed in https://docs.python.org/2/library/multiprocessing.html, for instance a Queue or a Pipe.
Here's what you should do to use a queue to talk to a child process from within a Django web application (I suppose that the background/child process is controlling a single device for all users of the web application, so everybody will get the same results -- this could also be per session):
Create a global queue object inside your views.py like this global_q = Queue().
Create a view for initializing the process, and pass the global Queue to the process function:
def init_process(request):
    p = Process(target=the_process, args=(global_q,))
    p.daemon = True
    p.start()
    return HttpResponse(json.dumps('Ok'), content_type="application/json")
Create a different view that will read from the global Queue:
def read_process_status(request):
    data = global_q.get()
    return HttpResponse(json.dumps(data), content_type="application/json")
Your process function handles the device and writes events to the queue parameter when needed:
def the_process(local_q):
    # do some things
    local_q.put([6])
    # do some other things
    local_q.put([34])
For the above to work without problems you must check whether the queue is empty, or make the get non-blocking, etc., but you get the idea.
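For example, a non-blocking variant of the status view could look like the sketch below; it assumes the same global_q as above and Python 3 naming (on Python 2 the Empty exception lives in the Queue module), and simply returns null while nothing new has been queued:

import json
import queue  # only needed for the Empty exception
from django.http import HttpResponse

def read_process_status(request):
    try:
        # don't block the request if the background process hasn't reported anything yet
        data = global_q.get_nowait()
    except queue.Empty:
        data = None
    return HttpResponse(json.dumps(data), content_type="application/json")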

Related

How to assign python requests sessions for single processes in multiprocessing pool?

Consider the following code example:
import multiprocessing
import requests

session = requests.Session()
data_to_be_processed = [...]

def process(arg):
    # do stuff with arg and get url
    response = session.get(url)
    # process response and generate data...
    return data

with multiprocessing.Pool() as pool:
    results = pool.map(process, data_to_be_processed)
In the example, the Session is assigned to a global variable, so after the Pool creates its processes it will be copied into each subprocess. I am not sure whether the session is thread safe, or how connection pooling inside a session works, so I would like to assign a separate session object to each process in the pool.
I am aware that I could just use requests.get(url) instead of session.get(url), but I would like to work with a session, and I am also considering using requests-html (https://html.python-requests.org/).
I am not very familiar with Python's multiprocessing; so far I have only used Pool, because it seemed like the best solution for processing data in parallel without a critical section, so I am open to different solutions.
Is there a way to do this cleanly and straightforwardly?
Short answer: you can use the global namespace to share data between the initializer and func:
import multiprocessing
import requests

session = None
data_to_be_processed = [...]

def init_process():
    global session
    session = requests.Session()

def process(arg):
    global session
    # do stuff with arg and get url
    response = session.get(url)
    # process response and generate data...
    return data

with multiprocessing.Pool(initializer=init_process) as pool:
    results = pool.map(process, data_to_be_processed)
Long answer:
Python's multiprocessing uses one of three possible start methods. All of them keep memory objects separate between the parent process and the child processes. In our case that means changes in the global namespace of processes run by Pool() will not propagate back to the parent process, nor to sibling processes.
For object destruction we can rely on the garbage collector, which steps in once a child process finishes its work. The absence of an explicit closing hook in multiprocessing.Pool() makes this approach unusable for objects that the GC cannot destroy (like a Pool() itself - see warning here).
Judging from the requests docs, it is perfectly OK to use requests.Session without an explicit close() on it.
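As a small illustration of the point about start methods (a sketch, not from the original answer): globals set inside the workers stay in the workers and never propagate back to the parent:

import multiprocessing
import os

value = None

def init():
    global value
    value = os.getpid()  # each worker sets its own copy

def show(_):
    return value

if __name__ == '__main__':
    with multiprocessing.Pool(2, initializer=init) as pool:
        print(pool.map(show, range(4)))  # worker PIDs, one per call
    print(value)  # still None in the parent process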

Stop a long-running action in web2py with multiprocessing

I have a web2py application that basically serves as a browser interface for a Python script. This script usually returns pretty quickly, but can occasionally take a long time. I want to provide a way for the user to stop the script's execution if it takes too long.
I am currently calling the function like this:
def myView():  # this function is called from ajax
    session.model = myFunc()  # myFunc is from a module which I have complete control over
    return dict(model=session.model)
myFunc, when called with certain options, uses multiprocessing but still ends up taking a long time. I need some way to terminate the function, or at the very least the thread's children.
The first thing I tried was to run myFunc in a new process and roll my own simple event system to kill it:
# in the controller
def myView():
    p_conn, c_conn = multiprocessing.Pipe()
    events = multiprocessing.Manager().dict()
    proc = multiprocessing.Process(target=_fit, args=(options, events, c_conn))
    proc.start()
    sleep(0.01)
    session.events = events
    proc.join()
    session.model = p_conn.recv()
    return dict(model=session.model)

def _fit(options, events, pipe):
    pipe.send(fitting.logistic_fit(options=options, events=events))
    pipe.close()

def stop():
    try:
        session.events['kill']()
    except SystemExit:
        pass  # because it raises that error intentionally
    return dict()

# in the module
def kill():
    print multiprocessing.active_children()
    for p in multiprocessing.active_children():
        p.terminate()
    raise SystemExit

def myFunc(options, events):
    events['kill'] = kill
I ran into a few major problems with this.
The session in stop() wasn't always the same as the session in myView(), so session.events was None.
Even when the session was the same, kill() wasn't properly killing the children.
The long-running function would hang the web2py thread, so stop() wasn't even processed until the function finished.
I considered not calling join() and using AJAX to pick up the result of the function at a later time, but I wasn't able to save the process object in the session for later use. The pipe seemed to be picklable, but then I ran into the problem of not being able to access the same session from another view.
How can I implement this feature?
For long running tasks, you are better off queuing them via the built-in scheduler. If you want to allow the user to manually stop a task that is taking too long, you can use the scheduler.stop_task(ref) method (where ref is the task id or uuid). Alternatively, when you queue a task, you can specify a timeout, so it will automatically stop if not completed within the timeout period.
You can do simple Ajax polling to notify the client when the task has completed (or implement something more sophisticated with websockets or SSE).
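A rough sketch of that approach (hedged: the exact queue_task/stop_task arguments can differ between web2py versions; myFunc, options and the 300-second timeout are placeholders):

# in a model file, e.g. models/scheduler.py
from gluon.scheduler import Scheduler
scheduler = Scheduler(db)

# in the controller
def start():
    task = scheduler.queue_task(myFunc, pvars=dict(options=options), timeout=300)
    session.task_id = task.id
    return dict(task_id=task.id)

def stop():
    scheduler.stop_task(session.task_id)  # stops a queued or running task
    return dict()

def status():
    return dict(status=scheduler.task_status(session.task_id))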

Reporting yielded results of long-running Celery task

Problem
I've segmented a long-running task into logical subtasks, so I can report the results of each subtask as it completes. However, I'm trying to report the results of a task that will effectively never complete (instead yielding values as it goes), and am struggling to do so with my existing solution.
Background
I'm building a web interface to some Python programs I've written. Users can submit jobs through web forms, then check back to see the job's progress.
Let's say I have two functions, each accessed via separate forms:
med_func: Takes ~1 minute to execute, results are passed off to render(), which produces additional data.
long_func: Returns a generator. Each yield takes on the order of 30 minutes, and should be reported to the user. There are so many yields, we can consider this iterator as infinite (terminating only when revoked).
Code, current implementation
With med_func, I report results as follows:
On form submission, I save an AsyncResult to a Django session:
task_result = med_func.apply_async([form], link=render.s())
request.session["task_result"] = task_result
The Django view for the results page accesses this AsyncResult. When a task has completed, results are saved into an object that is passed as context to a Django template.
def results(request):
    """ Serve (possibly incomplete) results of a session's latest run. """
    session = request.session

    try:  # Load most recent task
        task_result = session["task_result"]
    except KeyError:  # Already cleared, or doesn't exist
        if "results" not in session:
            session["status"] = "No job submitted"
    else:  # Extract data from Asynchronous Tasks
        session["status"] = task_result.status
        if task_result.ready():
            session["results"] = task_result.get()
            render_task = task_result.children[0]
            # Decorate with rendering results
            session["render_status"] = render_task.status
            if render_task.ready():
                session["results"].render_output = render_task.get()
                del(request.session["task_result"])  # Don't need any more

    return render_to_response('results.html', request.session)
This solution only works when the function actually terminates. I can't chain together logical subtasks of long_func, because there are an unknown number of yields (each iteration of long_func's loop may not produce a result).
Question
Is there any sensible way to access yielded objects from an extremely long-running Celery task, so that they can be displayed before the generator is exhausted?
In order for Celery to know what the current state of the task is, it sets some metadata in whatever result backend you have. You can piggy-back on that to store other kinds of metadata.
def yielder():
    for i in range(2**100):
        yield i

@task
def report_progress():
    for progress in yielder():
        # set current progress on the task
        report_progress.backend.mark_as_started(
            report_progress.request.id,
            progress=progress)

def view_function(request):
    task_id = request.session['task_id']
    task = AsyncResult(task_id)
    progress = task.info['progress']
    # do something with your current progress
I wouldn't throw a ton of data in there, but it works well for tracking the progress of a long-running task.
Paul's answer is great. As an alternative to using mark_as_started you can use Task's update_state method. They ultimately do the same thing, but the name "update_state" is a little more appropriate for what you're trying to do. You can optionally define a custom state that indicates your task is in progress (I've named my custom state 'PROGRESS'):
def yielder():
    for i in range(2**100):
        yield i

@task
def report_progress():
    for progress in yielder():
        # set current progress on the task
        report_progress.update_state(state='PROGRESS', meta={'progress': progress})

def view_function(request):
    task_id = request.session['task_id']
    task = AsyncResult(task_id)
    progress = task.info['progress']
    # do something with your current progress
Celery part:
def long_func(*args, **kwargs):
    i = 0
    while True:
        yield i
        do_something_here(*args, **kwargs)
        i += 1

@task()
def test_yield_task(task_id=None, **kwargs):
    the_progress = 0
    for the_progress in long_func(**kwargs):
        cache.set('celery-task-%s' % task_id, the_progress)
Webclient side, starting task:
r = test_yield_task.apply_async()
request.session['task_id'] = r.task_id
Testing last yielded value:
v = cache.get('celery-task-%s' % session.get('task_id'))
if v:
    do_something()
If you don't want to use the cache, or it's impossible, you can use the database, a file, or any other place that both the Celery worker and the server side can access. The cache is the simplest solution, but the workers and the server have to use the same cache.
A couple options to consider:
1 -- task groups. If you can enumerate all the subtasks at the time of invocation, you can apply the group as a whole -- that returns a TaskSetResult object you can use to monitor the results of the group as a whole, or of individual tasks in the group -- query this as needed when you want to check status (see the sketch below this list).
2 -- callbacks. If you can't enumerate all sub tasks (or even if you can!) you can define a web hook / callback that's the last step in the task -- called when the rest of the task completes. The hook would be against a URI in your app that ingests the result and makes it available via DB or app-internal API.
Some combination of these could solve your challenge.
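For option 1, a group might look roughly like this (a sketch assuming a Celery 3+ app instance named app; process_item, do_work and items are hypothetical names, not from the question):

from celery import group

@app.task
def process_item(item):
    # one enumerable subtask
    return do_work(item)

def start_job(items):
    job = group(process_item.s(item) for item in items)
    result = job.apply_async()
    # later: result.ready(), result.completed_count(), result.get()
    return result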
See also this great PyCon preso from one of the Instagram engineers.
http://blogs.vmware.com/vfabric/2013/04/how-instagram-feeds-work-celery-and-rabbitmq.html
At video mark 16:00, he discusses how they structure long lists of sub-tasks.

Sending signal to long running method in Django

I want to send a "pause" signal to a long running task in Celery and I'm trying to figure out the best way to do it. In the view I can pull an instance of the object from the database and tell that to save, but it's not the same as the instance of the object in Celery. The object doesn't check back to see if it's paused.
Polling the database from within the long-running class and task feels weird and impractical so I'm looking at sending my instance a message. I looked at using pubsub but I would prefer to use Django signals as it's already a Django project. I might be approaching this the wrong way.
Here's an example that does not work:
Models.py
class LongRunningClass(models.Model):
    is_paused = models.BooleanField(default=False)
    processed_files = models.IntegerField(default=0)
    total_files = models.IntegerField(default=100)

    def long_task(self):
        remaining_files = self.total_files - self.processed_files
        for i in xrange(remaining_files):
            if not self.is_paused:
                self.processed_files += 1
                time.sleep(1)
        # Task complete, let's save.
        self.save()
Views.py
def pause_task(self, pk):
    lrc = LongRunningClass.objects.get(pk=pk)
    lrc.is_paused = True
    lrc.save()
    return HttpResponse(json.dumps({'is_paused': lrc.is_paused}))

def resume_task(self, pk):
    lrc = LongRunningClass.objects.get(pk=pk)
    lrc.is_paused = False
    lrc.save()
    # Pretend this is a Celery task
    lrc.long_task()
So if I modify models.py to use signals, I can add these lines but it still does not quite work:
pause_signal = django.dispatch.Signal(providing_args=['is_paused'])

@django.dispatch.receiver(pause_signal)
def pause_callback(sender, **kwargs):
    if 'is_paused' in kwargs:
        sender.is_paused = kwargs['is_paused']
        sender.save()
That doesn't affect the instantiated class that's already running either. How can I tell the instance of my model running within the task to pause?
A Celery task runs in a separate process. Django signals are the standard "observer" pattern, which only works within a single process, so there is no way to organize communication between processes using signals. You need to reload the object from the database to know whether its properties have changed.
class LongRunningClass(models.Model):
    is_paused = models.BooleanField(default=False)
    processed_files = models.IntegerField(default=0)
    total_files = models.IntegerField(default=100)

    def get_is_paused(self):
        db_obj = LongRunningClass.objects.get(pk=self.pk)
        return db_obj.is_paused

    def long_task(self):
        remaining_files = self.total_files - self.processed_files
        for i in xrange(remaining_files):
            if not self.get_is_paused():
                self.processed_files += 1
                time.sleep(1)
        # Task complete, let's save.
        self.save()
This is not very good design - you'd be better off moving long_task somewhere else and operating on a freshly loaded LongRunningClass instance, but it will do the job. You could add some memcache here if you don't want to hit your database so often.
BTW: I'm not 100% sure, but you may have another design issue here. It is a rather rare case that you need really long-running tasks with this kind of loop. Think about removing the loop from your program (you have queues!). Take a look:
@celery.task(run_every=timedelta(minutes=2))  # adding XX files for processing every XX minutes
def scheduled_task(lr_pk):
    lr = LongRunningClass.objects.get(pk=lr_pk)
    if not lr.is_paused:
        remaining_files = lr.total_files - lr.processed_files
        for i in xrange(lr.files_per_iteration):
            process_file.delay(lr.pk, i)

@celery.task(rate_limit='1/m', queue='process_file')  # processing each file
def process_file(lr_pk, i):
    # do something with i
    lr = LongRunningClass.objects.get(pk=lr_pk)
    lr.processed_files += 1
    lr.save()
You have to set up celerybeat and create a separate queue for these types of tasks to implement this solution. But as a result you will have a lot of control over your program - rate limits, parallel execution, and your code would not hang on sleep(1). If you create another model for each file, you could track which files are processed and which are not, handle errors, etc., etc.
Take a look at celery.contrib.abortable -- this is an alternative base class for Celery tasks that implements a signal between caller and task to handle termination, and it could also be used to implement a "pause".
When the caller calls abort(), a status is marked in the backend. The task calls self.is_aborted() to see whether that special status has been set, and then implements whatever action is appropriate (terminate, pause, ignore, etc.). The action is under the task's control; this is not automated task termination.
This could be used as-is if it is sensible for the specific task to interpret the ABORT signal as a request for a pause. Or you could extend the class to provide more signals, not just the existing ABORT.
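A minimal sketch of that pattern, loosely reusing the file-processing loop from the question (assumptions: a configured Celery app instance named app, and a Celery version that supports bind=True with a custom base class):

from celery.contrib.abortable import AbortableTask, AbortableAsyncResult
from django.http import HttpResponse
import time

@app.task(bind=True, base=AbortableTask)
def long_task(self, total_files):
    for i in range(total_files):
        if self.is_aborted():
            # treat ABORT as "pause": persist progress and stop cleanly
            return i
        time.sleep(1)  # stand-in for processing one file
    return total_files

# elsewhere, e.g. in a Django view, to request the pause:
def pause_view(request, task_id):
    AbortableAsyncResult(task_id).abort()
    return HttpResponse("abort requested")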

"select" on multiple Python multiprocessing Queues?

What's the best way to wait (without spinning) until something is available in either one of two (multiprocessing) Queues, where both reside on the same system?
Actually you can use multiprocessing.Queue objects in select.select. i.e.
que = multiprocessing.Queue()
(input,[],[]) = select.select([que._reader],[],[])
would select que only if it is ready to be read from.
There is no documentation about it though. I found it out by reading the source code of the multiprocessing.queues module (on Linux it's usually something like /usr/lib/python2.6/multiprocessing/queues.py).
With Queue.Queue I haven't found any smart way to do this (and I would really love to).
It doesn't look like there's an official way to handle this yet. Or at least, not based on this:
http://bugs.python.org/issue3831
You could try something like what this post is doing -- accessing the underlying pipe filehandles:
http://haltcondition.net/?p=2319
and then use select.
I'm not sure how well select on a multiprocessing queue works on Windows. As select on Windows listens to sockets and not file handles, I suspect there could be problems.
My answer is to make a thread to listen to each queue in a blocking fashion, and to put the results all into a single queue listened to by the main thread, essentially multiplexing the individual queues into a single one.
My code for doing this is:
"""
Allow multiple queues to be waited upon.
queue,value = multiq.select(list_of_queues)
"""
import queue
import threading
class queue_reader(threading.Thread):
def __init__(self,inq,sharedq):
threading.Thread.__init__(self)
self.inq = inq
self.sharedq = sharedq
def run(self):
while True:
data = self.inq.get()
print ("thread reads data=",data)
result = (self.inq,data)
self.sharedq.put(result)
class multi_queue(queue.Queue):
def __init__(self,list_of_queues):
queue.Queue.__init__(self)
for q in list_of_queues:
qr = queue_reader(q,self)
qr.start()
def select(list_of_queues):
outq = queue.Queue()
for q in list_of_queues:
qr = queue_reader(q,outq)
qr.start()
return outq.get()
The following test routine shows how to use it:
import multiq
import queue

q1 = queue.Queue()
q2 = queue.Queue()

q3 = multiq.multi_queue([q1, q2])

q1.put(1)
q2.put(2)
q1.put(3)
q1.put(4)

res = 0
while not res == 4:
    while not q3.empty():
        res = q3.get()[1]
        print("returning result =", res)
Hope this helps.
Tony Wallace
It seems like using threads that forward incoming items to a single Queue which you then wait on is a practical choice when using multiprocessing in a platform-independent manner.
Avoiding the threads requires handling low-level pipes/FDs, which is both platform-specific and not easy to handle consistently with the higher-level API.
Or you would need Queues with the ability to set callbacks which i think are the proper higher level interface to go for. I.e. you would write something like:
singlequeue = Queue()
incoming_queue1.setcallback(singlequeue.put)
incoming_queue2.setcallback(singlequeue.put)
...
singlequeue.get()
Maybe the multiprocessing package could grow this API but it's not there yet. The concept works well with py.execnet which uses the term "channel" instead of "queues", see here http://tinyurl.com/nmtr4w
As of Python 3.3 you can use multiprocessing.connection.wait to wait on multiple Queue._reader objects at once.
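For example (a sketch only; it still relies on the undocumented _reader attribute discussed above):

import multiprocessing
from multiprocessing.connection import wait

q1 = multiprocessing.Queue()
q2 = multiprocessing.Queue()
q2.put('hello')

readers = {q._reader: q for q in (q1, q2)}
ready = wait(list(readers))       # blocks until at least one queue has data
for conn in ready:
    print(readers[conn].get())    # safe as long as this is the only consumer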
You could use something like the Observer pattern, wherein Queue subscribers are notified of state changes.
In this case, you could have your worker thread designated as a listener on each queue, and whenever it receives a ready signal, it can work on the new item, otherwise sleep.
New version of above code...
My code for doing this is:
"""
Allow multiple queues to be waited upon.
An EndOfQueueMarker marks a queue as
"all data sent on this queue".
When this marker has been accessed on
all input threads, this marker is returned
by the multi_queue.
"""
import queue
import threading
class EndOfQueueMarker:
def __str___(self):
return "End of data marker"
pass
class queue_reader(threading.Thread):
def __init__(self,inq,sharedq):
threading.Thread.__init__(self)
self.inq = inq
self.sharedq = sharedq
def run(self):
q_run = True
while q_run:
data = self.inq.get()
result = (self.inq,data)
self.sharedq.put(result)
if data is EndOfQueueMarker:
q_run = False
class multi_queue(queue.Queue):
def __init__(self,list_of_queues):
queue.Queue.__init__(self)
self.qList = list_of_queues
self.qrList = []
for q in list_of_queues:
qr = queue_reader(q,self)
qr.start()
self.qrList.append(qr)
def get(self,blocking=True,timeout=None):
res = []
while len(res)==0:
if len(self.qList)==0:
res = (self,EndOfQueueMarker)
else:
res = queue.Queue.get(self,blocking,timeout)
if res[1] is EndOfQueueMarker:
self.qList.remove(res[0])
res = []
return res
def join(self):
for qr in self.qrList:
qr.join()
def select(list_of_queues):
outq = queue.Queue()
for q in list_of_queues:
qr = queue_reader(q,outq)
qr.start()
return outq.get()
The following code is my test routine showing how it works:
import multiq
import queue

q1 = queue.Queue()
q2 = queue.Queue()

q3 = multiq.multi_queue([q1, q2])

q1.put(1)
q2.put(2)
q1.put(3)
q1.put(4)
q1.put(multiq.EndOfQueueMarker)
q2.put(multiq.EndOfQueueMarker)

res = 0
have_data = True
while have_data:
    res = q3.get()[1]
    print("returning result =", res)
    have_data = not (res == multiq.EndOfQueueMarker)
The one situation where I'm usually tempted to multiplex multiple queues is when each queue corresponds to a different type of message that requires a different handler. You can't just pull from one queue because if it isn't the type of message you want, you need to put it back.
However, in this case, each handler is essentially a separate consumer, which makes it a multi-producer, multi-consumer problem. Fortunately, even in this case you still don't need to block on multiple queues: you can create a different thread/process for each handler, with each handler having its own queue. Basically, you can just break it into multiple instances of a multi-producer, single-consumer problem.
The only situation I can think of where you would have to wait on multiple queues is if you were forced to put multiple handlers in the same thread/process. In that case, I would restructure it by creating a queue for my main thread, spawning a thread for each handler, and have the handlers communicate with the main thread using the main queue. Each handler could then have a separate queue for its unique type of message.
Don't do it.
Put a header on the messages and send them to a common queue. This simplifies the code and will be cleaner overall.
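A minimal sketch of that single-queue approach (the message types and handler names here are hypothetical):

import multiprocessing

def handle_status(payload):
    print('status:', payload)

def handle_result(payload):
    print('result:', payload)

HANDLERS = {'status': handle_status, 'result': handle_result}

def consumer(q):
    while True:
        kind, payload = q.get()  # every producer sends a (header, payload) pair
        if kind == 'stop':
            break
        HANDLERS[kind](payload)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=consumer, args=(q,))
    p.start()
    q.put(('status', 'working'))
    q.put(('result', 42))
    q.put(('stop', None))
    p.join()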
