Python, non-blocking threads

There are a lot of tutorials and similar resources on Python and asynchronous coding techniques, but I am having difficulty filtering through the results to find what I need. I am new to Python, so that doesn't help.
Setup
I currently have two objects that look sort of like this (please excuse my python formatting):
class Alphabet(parent):
    def __init__(self, item):
        self.item = item

    def style_alphabet(self, callback):
        # this method presumably takes a very long time, and fills out some
        # properties of the Alphabet object
        callback()

class myobj(another_parent):
    def __init__(self):
        self.alphabets = []
        self.refresh()

    def foo(self):
        for item in ['a', 'b', 'c']:
            letters = Alphabet(item)
            self.alphabets.append(letters)
        self.screen_refresh()
        for item in self.alphabets:
            # this is the code that I want to run asynchronously. Typically, my
            # efforts all involve passing item.style_alphabet to the async
            # object / method and either calling start() here or in Alphabet
            item.style_alphabet(self.screen_refresh)

    def refresh(self):
        self.foo()
        # redraw screen, using the refreshed alphabets
        redraw_screen()

    def screen_refresh(self):
        # a lighter version of refresh()
        redraw_screen()
The idea is that the main thread initially draws the screen with incomplete Alphabet objects, then fills out the Alphabet objects, updating the screen as each one completes.
I've tried a number of implementations of threading.Thread, Queue.Queue, and even futures, and for some reason they either haven't worked or they have blocked the main thread, so that the initial draw doesn't take place.
A few of the async methods I've attempted:
class Async(threading.Thread):
    def __init__(self, f, cb):
        threading.Thread.__init__(self)
        self.f = f
        self.cb = cb

    def run(self):
        self.f()
        self.cb()

def run_as_thread(f):
    # When I tried this method, I assigned the callback to a property of "Alphabet"
    thr = threading.Thread(target=f)
    thr.start()

def run_async(f, cb, args=()):
    pool = Pool(processes=1)
    result = pool.apply_async(func=f, args=args, callback=cb)

I ended up writing a thread pool to deal with this use pattern:
Create a queue and hand a reference to it off to all the worker threads.
Add task objects to the queue from the main thread; worker threads pull objects from the queue and invoke the functions.
Add an event to each task, to be signaled on the worker thread at task completion.
Keep a list of task objects on the main thread and use polling to see if the UI needs an update.
One can get fancy and add a pointer to a callback function on the task objects if needed.
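A minimal sketch of that pattern (hypothetical: style_alphabet_item stands in for the long-running work and redraw_screen for the UI refresh):
import functools
import threading
import time
import Queue  # the stdlib 'queue' module on Python 3

class Task(object):
    """A unit of work plus an event the worker signals on completion."""
    def __init__(self, func, callback=None):
        self.func = func
        self.callback = callback       # optional "get fancy" callback
        self.done = threading.Event()  # set by the worker thread

def worker(tasks):
    while True:
        task = tasks.get()
        if task is None:               # sentinel: shut this worker down
            break
        task.func()                    # the long-running work
        if task.callback is not None:
            task.callback()
        task.done.set()

task_queue = Queue.Queue()
for _ in range(4):
    t = threading.Thread(target=worker, args=(task_queue,))
    t.daemon = True
    t.start()

# Main thread: enqueue work, then poll so the UI stays responsive.
pending = [Task(functools.partial(style_alphabet_item, item))
           for item in ['a', 'b', 'c']]
for task in pending:
    task_queue.put(task)
while not all(task.done.is_set() for task in pending):
    redraw_screen()                    # hypothetical UI refresh
    time.sleep(0.05)
redraw_screen()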
My solution was inspired by what I found on Google: http://code.activestate.com/recipes/577187-python-thread-pool/
I kept improving on that design to add features and give the threading, multiprocessing, and parallel python modules a consistent interface. My implementation is at:
https://github.com/nornir/nornir-pools
Docs:
http://nornir.github.io/packages/nornir_pools.html
If you are new to Python and not familiar with the GIL, I suggest doing a search for Python threading and the global interpreter lock (GIL). It isn't a happy story. Generally I find I need to use the multiprocessing module to get decent performance.
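For CPU-bound work like the styling above, a minimal multiprocessing sketch might look like this (hypothetical: expensive_styling and redraw_screen stand in for the real work and UI refresh):
import time
from multiprocessing import Pool

def style_one(item):
    # must be a module-level function so it can be pickled to the workers
    return expensive_styling(item)     # hypothetical CPU-bound work

if __name__ == '__main__':
    pool = Pool(processes=4)
    # map_async returns immediately, so the caller can keep redrawing the UI
    async_result = pool.map_async(style_one, ['a', 'b', 'c'])
    while not async_result.ready():
        redraw_screen()                # hypothetical UI refresh
        time.sleep(0.05)
    styled = async_result.get()
    pool.close()
    pool.join()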
Hope some of this helps.

Related

Shared pool map between processes with object-oriented python

(python2.7)
I'm trying to write a kind of scanner that has to walk through CFG nodes and split into different processes at branch points, for parallelism.
The scanner is represented by an object of class Scanner. This class has one method, traverse, that walks through the said graph and splits if necessary.
Here is how it looks:
class Scanner(object):
    def __init__(self, atrb1, ...):
        self.attribute1 = atrb1
        self.process_pool = Pool(processes=4)

    def traverse(self, ...):
        [...]
        if branch:
            self.process_pool.map(my_func, todo_list)
My problem is the following:
How do I create an instance of multiprocessing.Pool that is shared between all of my processes? I want it to be shared because, since a path can be split again, I do not want to end up with a kind of fork bomb; having the same Pool will help me limit the number of processes running at the same time.
The above code does not work, since a Pool cannot be pickled. Consequently, I tried this:
class Scanner(object):
    def __getstate__(self):
        self_dict = self.__dict__.copy()
        del self_dict['process_pool']
        return self_dict
    [...]
But obviously, it results in having self.process_pool not defined in the created processes.
Then, I tried to create a Pool as a module attribute:
process_pool = Pool(processes=4)

def my_func(x):
    [...]

class Scanner(object):
    def __init__(self, atrb1, ...):
        self.attribute1 = atrb1

    def traverse(self, ...):
        [...]
        if branch:
            process_pool.map(my_func, todo_list)
It does not work, and this answer explains why.
But here comes the thing: wherever I create my Pool, something is missing. If I create the Pool at the end of my file, it does not see self.attribute1 and fails with an AttributeError.
I'm not even trying to share it yet, and I'm already stuck with multiprocessing's way of doing things.
I don't know if I have been thinking about the whole thing correctly, but I cannot believe it's so complicated to handle something as simple as "having a worker pool and giving them tasks".
Thank you,
EDIT:
I resolved my first problem (the AttributeError): my class had a callback as an attribute, and this callback was defined in the main script file, after the import of the scanner module... But the concurrency and "do not fork bomb" requirements are still a problem.
What you want to do can't be done safely. Think about what happens if you somehow had a single Pool shared across parent and worker processes, with, say, two worker processes. The parent runs a map that tries to perform two tasks, and each task needs to map two more tasks. The two parent-dispatched tasks go to each worker, and the parent blocks. Each worker sends two more tasks to the shared pool and blocks waiting for them to complete. But now all the workers are occupied, waiting for a worker to become free: you've deadlocked.
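A minimal sketch of that deadlock. It uses multiprocessing.pool.ThreadPool (not in the question) because a thread pool genuinely can be shared into its own tasks, whereas a process Pool cannot even be pickled, as noted above:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(processes=2)         # two workers, as in the story above

def subtask(x):
    return x * 2

def task(x):
    # Each task maps two more tasks onto the *same* pool and blocks on them.
    # With both workers already stuck here, nobody is left to run the
    # subtasks, so this blocks forever.
    return sum(pool.map(subtask, [x, x + 1]))

# Don't run this -- it hangs exactly as described:
# print(pool.map(task, [1, 2]))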
A safer approach would be to have the workers return enough information to dispatch additional tasks in the parent. Then you could do something like:
class MoreWork(object):
    def __init__(self, func, *args):
        self.func = func
        self.args = args

pool = multiprocessing.Pool()
try:
    base_task = somefunc, someargs
    outstanding = collections.deque([pool.apply_async(*base_task)])
    while outstanding:
        result = outstanding.popleft().get()
        if isinstance(result, MoreWork):
            outstanding.append(pool.apply_async(result.func, result.args))
        else:
            pass  # do something with a "final" result, maybe breaking the loop
finally:
    pool.terminate()
What the functions are is up to you; they'd just return information in a MoreWork when there was more to do, not launch a task directly. The point is that by making the parent solely responsible for task dispatch, and the workers solely responsible for task completion, you can't deadlock with all workers blocked waiting for tasks that are in the queue but not being processed.
This is also not at all optimized; ideally, you wouldn't block waiting on the first item in the queue if other items in the queue were already complete. It's a lot easier to do this with the concurrent.futures module, specifically with concurrent.futures.wait to wait on the first available result from an arbitrary number of outstanding tasks, but you'd need a third-party PyPI package to get concurrent.futures on Python 2.7.
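A sketch of that approach, assuming the futures backport (or Python 3) and reusing the MoreWork class, somefunc, and someargs from above:
import concurrent.futures  # on Python 2.7: pip install futures

with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
    pending = {executor.submit(somefunc, *someargs)}
    while pending:
        # Wake up as soon as *any* task finishes, not just the oldest one.
        done, pending = concurrent.futures.wait(
            pending, return_when=concurrent.futures.FIRST_COMPLETED)
        for future in done:
            result = future.result()
            if isinstance(result, MoreWork):
                pending.add(executor.submit(result.func, *result.args))
            else:
                pass  # handle a "final" result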

Thread blocks in an RLock

I have this implementation:
def mlock(f):
    '''Method lock. Uses a class lock to execute the method'''
    def wrapper(self, *args, **kwargs):
        with self._lock:
            res = f(self, *args, **kwargs)
        return res
    return wrapper

class Lockable(object):
    def __init__(self):
        self._lock = threading.RLock()
Which I use in several places, for example:
class Fifo(Lockable):
    '''Implementation of a Fifo. It will grow until the given maxsize;
    then it will drop the head to add new elements'''
    def __init__(self, maxsize, name='FIFO', data=None, inserted=0, dropped=0):
        self.maxsize = maxsize
        self.name = name
        self.inserted = inserted
        self.dropped = dropped
        self._fifo = []
        self._cnt = None
        Lockable.__init__(self)
        if data:
            for d in data:
                self.put(d)

    @mlock
    def __len__(self):
        length = len(self._fifo)
        return length
...
The application is quite complex, but it works well. Just to make sure, I have been doing stress tests of the running service, and I find that it sometimes (rarely) deadlocks in the mlock. I assume another thread is holding the lock and not releasing it. How can I debug this? Please note that:
it is very difficult to reproduce: I need hours of testing to deadlock
the application is running in the background
once it deadlocks, I can not interact with it anymore
I would like to know:
what thread is holding the lock?
why is it not being released? I am using a context manager to acquire the lock, so it should always be released. Where is the bug?!
What options do I have to further debug this?
I have been checking whether there is any way of knowing which thread is holding an RLock, but it seems there is no API for this.
I don't think there's an easy solution for this, but it can be done with some work.
Personally, I've found the following useful (albeit in C++).
Start by creating a Lockable base class that tracks threads' interactions with it. A Lockable object will use an additional (non-recursive) lock protecting a dictionary that maps thread ids to interactions with it:
When a thread tries to lock, it (locks and) creates an entry.
When it acquires the lock, it (locks and) modifies the entry.
When it releases the lock, it (locks and) removes the entry.
Additionally, a Lockable object will have a low-priority thread that wakes up very rarely (once every several minutes) and checks for indications of a deadlock (approximated by the event that one thread has held the lock for a long time while at least one other thread has waited for it).
The entry for a thread should therefore include:
the operation's time
the stacktrace info leading to the operation.
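A rough sketch of such a tracking wrapper (the names here are illustrative, not from the original answer):
import threading
import time
import traceback

class TrackedLockable(object):
    """An RLock whose acquire/release bookkeeping is protected by a
    separate, non-recursive lock, as described above.  Re-entrant
    acquires would need a depth count; omitted in this sketch."""
    def __init__(self):
        self._lock = threading.RLock()
        self._meta = threading.Lock()   # protects self._entries
        self._entries = {}              # thread id -> (state, time, stack)

    def _record(self, state):
        with self._meta:
            self._entries[threading.current_thread().ident] = (
                state, time.time(), traceback.format_stack())

    def acquire(self):
        self._record('waiting')
        self._lock.acquire()
        self._record('holding')

    def release(self):
        with self._meta:
            self._entries.pop(threading.current_thread().ident, None)
        self._lock.release()

    def dump_suspects(self, max_hold=60.0):
        """Watchdog hook: report threads holding the lock 'too long'."""
        now = time.time()
        with self._meta:
            for tid, (state, t0, stack) in self._entries.items():
                if state == 'holding' and now - t0 > max_hold:
                    print('thread %r held lock for %.0fs:\n%s'
                          % (tid, now - t0, ''.join(stack)))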
The problem is that this bookkeeping can alter the relative timing of threads, which might cause your program to take different execution paths than it normally does.
Here you need to get creative: you might also need to induce (random) time lapses in these (and possibly other) operations.
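Independently of the tracking idea, since the hung process can no longer be interacted with, one low-effort option is to install a signal handler up front that dumps every thread's stack on demand. This is a sketch; sys._current_frames is real CPython API, and SIGUSR1 assumes a Unix host:
import signal
import sys
import traceback

def dump_all_stacks(signum, frame):
    # Print the current stack of every live thread; trigger this when the
    # service hangs to see which thread is blocked where (e.g. inside mlock).
    for thread_id, stack in sys._current_frames().items():
        print('--- thread %r ---' % thread_id)
        traceback.print_stack(stack)

signal.signal(signal.SIGUSR1, dump_all_stacks)  # then: kill -USR1 <pid>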

Multiple python loops in same process

I have a project that I'm writing in Python that will be sending commands to hardware (Phidgets). Because I'll be interfacing with more than one hardware component, I need to have more than one loop running concurrently.
I've researched the Python multiprocessing module, but it turns out that the hardware can only be controlled by one process at a time, so all my loops need to run in the same process.
As of right now, I've been able to accomplish my task with a Tk() loop, but without actually using any of the GUI tools. For example:
from Tkinter import Tk

class hardwareCommand:
    def __init__(self):
        # Define Tk object
        self.root = Tk()
        # open the hardware, set up self. variables, call the other functions
        self.hardwareLoop()
        self.UDPListenLoop()
        self.eventListenLoop()
        # start the Tk loop
        self.root.mainloop()

    def hardwareLoop(self):
        # Timed processing with the hardware
        setHardwareState(self.state)
        self.root.after(100, self.hardwareLoop)

    def UDPListenLoop(self):
        # Listen for commands from UDP, call appropriate functions
        self.state = updateState(self.state)
        self.root.after(2000, self.UDPListenLoop)

    def eventListenLoop(self, event=None):
        if event == importantEvent:
            self.state = updateState(self.event.state)
        self.root.after(2000, self.eventListenLoop)

hardwareCommand()
So basically, the only reason for defining the Tk() loop is so that I can call the root.after() command within those functions that need to be concurrently looped.
This works, but is there a better / more pythonic way of doing it? I'm also wondering if this method causes unnecessary computational overhead (I'm not a computer science guy).
Thanks!
The multiprocessing module is geared towards having multiple separate processes. Although you can use Tk's event loop, that is unnecessary if you don't have a Tk-based GUI, so if you just want multiple tasks to execute in the same process you can use the threading module. With it you can create specific classes that encapsulate a separate thread of execution, so you can have many "loops" executing simultaneously in the background. Think of something like this:
from threading import Thread

class hardwareTasks(Thread):
    def hardwareSpecificFunction(self):
        """
        Example hardware specific task
        """
        #do something useful
        return

    def run(self):
        """
        Loop running hardware tasks
        """
        while True:
            #do something
            self.hardwareSpecificFunction()

class eventListen(Thread):
    def eventHandlingSpecificFunction(self):
        """
        Example event handling specific task
        """
        #do something useful
        return

    def run(self):
        """
        Loop treating events
        """
        while True:
            #do something
            self.eventHandlingSpecificFunction()

if __name__ == '__main__':
    # Instantiate specific classes
    hw_tasks = hardwareTasks()
    event_tasks = eventListen()

    # This will start each specific loop in the background (the 'run' method)
    hw_tasks.start()
    event_tasks.start()

    while True:
        pass  #do something (main loop)
You should check this article to get more familiar with the threading module. Its documentation is a good read too, so you can explore its full potential.
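As a follow-up to the sketch above: the bare while True loops cannot be shut down cleanly. A common variant (illustrative, not from the original answer; do_periodic_work is a hypothetical task) uses an Event both as the stop flag and as an interruptible sleep:
from threading import Event, Thread

class StoppableLoop(Thread):
    def __init__(self, interval=0.1):
        Thread.__init__(self)
        self.interval = interval
        self._stop_event = Event()

    def run(self):
        while not self._stop_event.is_set():
            do_periodic_work()                    # hypothetical task
            self._stop_event.wait(self.interval)  # sleeps, but wakes early on stop()

    def stop(self):
        self._stop_event.set()
Calling loop.stop() followed by loop.join() then shuts the thread down without killing the process.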

A couple of questions about calling methods outside of QThread from within the QThread - is my design flawed?

I have an application that has a GUI thread and many different worker threads. In this application, I have a functions.py module, which contains a lot of different "utility" functions that are used all over the application.
Yesterday the application was released, and some users (a minority, but still) have reported problems with the application crashing. I looked over my code and noticed a possible design flaw, and would like to check with the lovely people of SO to see if I am right and if this is indeed a flaw.
Suppose I have this defined in my functions.py module:
class Functions:
    solveComputationSignal = Signal(str)
    updateStatusSignal = Signal(int, str)
    text = None

    @classmethod
    def setResultText(cls, text):
        cls.text = text

    @classmethod
    def solveComputation(cls, platform, computation, param=None):
        # Not the entirety of the method is listed here
        result = urllib.urlopen(COMPUTATION_URL).read()
        if param is None:
            cls.solveComputationSignal.emit(result)
        else:
            cls.solveAlternateComputation(platform, computation)
        while not cls.text:
            time.sleep(3)
        return cls.text if cls.text else False

    @classmethod
    def updateCurrentStatus(cls, platform, statusText):
        cls.updateStatusSignal.emit(platform, statusText)
I think these methods in themselves are fine. The two signals defined here are connected to in the GUI thread. The first signal pops up a dialog in which the computation is presented. The GUI thread calls the setResultText() method and sets the resulting string as entered by the user (if anyone knows of a better way to wait until the user has entered the text, other than sleeping and waiting for self.text to become true, please let me know). The solveAlternateComputation is another method in the same class that solves the computation automatically; however, it too calls the setResultText() method that sets the resulting text.
The second signal updates the statusBar text of the main GUI as well.
What's worse is that I think the above design, while perhaps flawed, is not the problem.
The problem lies, I believe, in the way I call these methods, which is from the worker threads (note that I have multiple similar workers, all of which are different "platforms"):
Assume I have this (and I do):
class WorkerPlatform1(QThread):
    # Init and other methods are here

    def run(self):
        # Thread does its job here, but then when it needs to present the
        # computation, instead of emitting a signal, this is what I do
        self.f = functions.Functions
        result = self.f.solveComputation(platform, computation)
        if result:
            pass  # Go on with the task
        else:
            self.f.updateCurrentStatus(platform, "Error grabbing computation!")
In this case I think that my flaw is that the thread itself is not emitting any signals, but rather calling callables residing outside of that thread directly. Am I right in thinking that this could cause my application to crash, even though the faulty module is reported as QtGui4.dll?
One more thing: both of these methods in the Functions class are accessed by many threads almost simultaneously. Is this even advisable - having methods that reside outside of a thread accessed by many threads all at the same time? Can it so happen that I "confuse" my program? The reason I am asking is that people whose application is not crashing report that, very often, solveComputation() returns incorrect text - not all the time, but very often. Since the COMPUTATION_URL's server can take some time to respond (even 10+ seconds), is it possible that, while the urllib library is still waiting for the server's response to one thread's call, another thread calls the method, causing it to use a different COMPUTATION_URL and return an incorrect value in some cases?
Finally, I am thinking of solutions: for my first (crashing) problem, do you think the proper solution would be to directly emit a Signal from the thread itself, and then connect it in the GUI thread? Is that the right way to go about it?
Secondly, for solveComputation returning incorrect values, would I solve it by moving that method (and its accompanying methods) into every worker class? Then I could call them directly and hopefully get the correct response - or rather dozens of different responses (since I have that many threads), one for every thread?
Thank you all and I apologize for the wall of text.
EDIT: I would like to add that, when running in a console, some users see this error:
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QLabel(0x4795500), parent's thread is QThread(0x2d3fd90), current thread is WordpressCreator(0x49f0548)
Your design is flawed if you really are using your Functions class like this, with classmethods storing results on class attributes shared amongst multiple workers. It should use instance methods throughout, and each thread should use its own instance of the class:
class Functions(QObject):
    solveComputationSignal = pyqtSignal(str)
    updateStatusSignal = pyqtSignal(int, str)

    def __init__(self, parent=None):
        super(Functions, self).__init__(parent)
        self.text = ""

    def setResultText(self, text):
        self.text = text

    def solveComputation(self, platform, computation, param=None):
        result = urllib.urlopen(COMPUTATION_URL).read()
        if param is None:
            self.solveComputationSignal.emit(result)
        else:
            self.solveAlternateComputation(platform, computation)
        while not self.text:
            time.sleep(3)
        return self.text if self.text else False

    def updateCurrentStatus(self, platform, statusText):
        self.updateStatusSignal.emit(platform, statusText)

# worker_A
def run(self):
    ...
    f = Functions()

# worker_B
def run(self):
    ...
    f = Functions()
Also, for doing your urlopen, instead of sleeping in a loop to check when the result is ready, you can make use of QNetworkAccessManager to make your requests and use signals to be notified when results are ready.
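A minimal sketch of that idea with PyQt4 (the class and method names here are illustrative, not from the original answer):
from PyQt4.QtCore import QObject, QUrl
from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest

class ComputationFetcher(QObject):     # illustrative name
    def __init__(self, parent=None):
        super(ComputationFetcher, self).__init__(parent)
        self.manager = QNetworkAccessManager(self)
        self.manager.finished.connect(self.handleReply)

    def fetch(self, url):
        self.manager.get(QNetworkRequest(QUrl(url)))  # returns immediately

    def handleReply(self, reply):
        result = reply.readAll()       # response body as a QByteArray
        reply.deleteLater()
        # emit a signal carrying `result` here instead of polling self.text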

Waiting on event with Twisted and PB

I have a Python app that uses multiple threads, and I am curious about the best way to wait for something in Python without burning CPU or locking the GIL.
My app uses Twisted, and I spawn a thread to run a long operation so I do not stomp on the reactor thread. This long operation also spawns some threads using Twisted's deferToThread to do something else, and the original thread wants to wait for the results from those deferreds.
What I have been doing is this:
while self._waiting:
    time.sleep(0.01)
which seemed to prevent Twisted PB's objects from receiving messages, so I thought sleep was locking the GIL. Further investigation by the posters below revealed, however, that it does not.
There are better ways to wait on threads without blocking the reactor thread or Python, as posted below.
If you're already using Twisted, you should never need to "wait" like this.
As you've described it:
I spawn a thread to run a long operation ... This long operation also spawns some threads using twisted's deferToThread ...
That implies that you're calling deferToThread from your "long operation" thread, not from your main thread (the one where reactor.run() is running). As Jean-Paul Calderone already noted in a comment, you can only call Twisted APIs (such as deferToThread) from the main reactor thread.
The lock-up that you're seeing is a common symptom of not following this rule. It has nothing to do with the GIL, and everything to do with the fact that you have put Twisted's reactor into a broken state.
Based on your loose description of your program, I've tried to write a sample program that does what you're talking about using only Twisted APIs, spawning all threads via Twisted and controlling them all from the main reactor thread.
import time

from twisted.internet import reactor
from twisted.internet.defer import gatherResults
from twisted.internet.threads import deferToThread, blockingCallFromThread

def workReallyHard():
    "'Work' function, invoked in a thread."
    time.sleep(0.2)

def longOperation():
    for x in range(10):
        workReallyHard()
        blockingCallFromThread(reactor, startShortOperation, x)
    result = blockingCallFromThread(reactor, gatherResults, shortOperations)
    return 'hooray', result

def shortOperation(value):
    workReallyHard()
    return value * 100

shortOperations = []

def startShortOperation(value):
    def done(result):
        print 'Short operation complete!', result
        return result
    shortOperations.append(
        deferToThread(shortOperation, value).addCallback(done))

d = deferToThread(longOperation)

def allDone(result):
    print 'Long operation complete!', result
    reactor.stop()

d.addCallback(allDone)

reactor.run()
Note that at the point in allDone where the reactor is stopped, you could fire off another "long operation" and have it start the process all over again.
Have you tried condition variables? They are used like this:
from threading import Condition

condition = Condition()

def consumer_in_thread_A():
    condition.acquire()
    try:
        while resource_not_yet_available:
            condition.wait()
        # Here, the resource is available and may be
        # consumed
    finally:
        condition.release()

def produce_in_thread_B():
    # ... create resource, whatsoever
    condition.acquire()
    try:
        condition.notify_all()
    finally:
        condition.release()
Condition variables act as locks (acquire and release), but their main purpose is to provide a control mechanism that lets you wait for them to be notified via notify or notify_all.
"I recently found out that calling time.sleep(X) will lock the GIL for the entire time X and therefore freeze ALL python threads for that time period."
You found wrongly -- this is definitely not how it works. What's the source where you found this mis-information?
Anyway, then you clarify (in comments -- better edit your Q!) that you're using deferToThread and your problem with this is that...:
"Well yes I defer the action to a thread and give twisted a callback. But the parent thread needs to wait for the whole series of sub threads to complete before it can move onto a new set of sub threads to spawn."
So use as the callback a method of an object with a counter: start it at 0, increment it by one every time you defer-to-thread, and decrement it by one in the callback method.
When the callback method sees that the decremented counter has gone back to 0, it knows that we're done waiting "for the whole series of sub threads to complete", and then the time has come to "move on to a new set of sub threads to spawn"; thus, in that case only, it calls the "spawn a new set of sub threads" function or method -- it's that easy!
E.g. (net of typos &c as this is untested code, just to give you the idea)...:
class Waiter(object):
    def __init__(self, what_next, *a, **k):
        self.counter = 0
        self.what_next = what_next
        self.a = a
        self.k = k

    def one_more(self):
        self.counter += 1

    def do_wait(self, *dont_care):
        self.counter -= 1
        if self.counter == 0:
            self.what_next(*self.a, **self.k)

def spawn_one_thread(waiter, long_calculation, *a, **k):
    waiter.one_more()
    d = threads.deferToThread(long_calculation, *a, **k)
    d.addCallback(waiter.do_wait)

def spawn_all(waiter, list_of_lists_of_functions_args_and_kwds):
    if not list_of_lists_of_functions_args_and_kwds:
        return
    if waiter is None:
        # pass None so the next round builds its own fresh Waiter
        waiter = Waiter(spawn_all, None, list_of_lists_of_functions_args_and_kwds)
    this_time = list_of_lists_of_functions_args_and_kwds.pop(0)
    for f, a, k in this_time:
        spawn_one_thread(waiter, f, *a, **k)

def start_it_all(list_of_lists_of_functions_args_and_kwds):
    spawn_all(None, list_of_lists_of_functions_args_and_kwds)
According to the Python source, time.sleep() does not hold the GIL.
http://code.python.org/hg/trunk/file/98e56689c59c/Modules/timemodule.c#l920
Note the use of Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS, as documented here:
http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock
The threading module allows you to spawn a thread, which is then represented by a Thread object. That object has a join method that you can use to wait for the subthread to complete.
See http://docs.python.org/library/threading.html#module-threading
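For example:
import threading

def long_operation():
    pass  # stand-in for the real work

t = threading.Thread(target=long_operation)
t.start()
t.join()   # blocks until long_operation returns; join(timeout) also exists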
