Shared pool map between processes with object-oriented Python

(python2.7)
I'm trying to write a kind of scanner that has to walk through CFG nodes and split into different processes on branching, for parallelism purposes.
The scanner is represented by an object of class Scanner. This class has one method traverse that walks through the said graph and splits if necessary.
Here is how it looks:
class Scanner(object):
    def __init__(self, atrb1, ...):
        self.attribute1 = atrb1
        self.process_pool = Pool(processes=4)

    def traverse(self, ...):
        [...]
        if branch:
            self.process_pool.map(my_func, todo_list)
My problem is the following:
How do I create an instance of multiprocessing.Pool that is shared between all of my processes? I want it to be shared because, since a path can be split again, I do not want to end up with a kind of fork bomb, and having the same Pool will help me limit the number of processes running at the same time.
The above code does not work, since Pool cannot be pickled. Consequently, I tried this:
class Scanner(object):
    def __getstate__(self):
        self_dict = self.__dict__.copy()
        del self_dict['process_pool']
        return self_dict
    [...]
[...]
But obviously, it results in self.process_pool not being defined in the created processes.
Then, I tried to create a Pool as a module attribute:
process_pool = Pool(processes=4)

def my_func(x):
    [...]

class Scanner(object):
    def __init__(self, atrb1, ...):
        self.attribute1 = atrb1

    def traverse(self, ...):
        [...]
        if branch:
            process_pool.map(my_func, todo_list)
It does not work, and this answer explains why.
But here comes the thing: wherever I create my Pool, something is missing. If I create this Pool at the end of my file, it does not see self.attribute1, just like the case described in that answer, and fails with an AttributeError.
I'm not even trying to share it yet, and I'm already stuck with multiprocessing's way of doing things.
I don't know if I've been thinking about this correctly, but I can't believe it's so complicated to handle something as simple as "having a worker pool and giving them tasks".
Thank you,
EDIT:
I resolved my first problem (the AttributeError): my class had a callback as an attribute, and this callback was defined in the main script file, after the import of the scanner module... But the concurrency and "do not fork bomb" question is still a problem.

What you want to do can't be done safely. Suppose you somehow had a single Pool shared between the parent and the worker processes, with, say, two worker processes. The parent runs a map that tries to perform two tasks, and each of those tasks needs to map two more tasks. The two parent-dispatched tasks go to the two workers, and the parent blocks. Each worker sends two more tasks to the shared pool and blocks waiting for them to complete. But now all workers are occupied, each waiting for a worker to become free; you've deadlocked.
A safer approach would be to have the workers return enough information to dispatch additional tasks in the parent. Then you could do something like:
import collections
import multiprocessing

class MoreWork(object):
    def __init__(self, func, *args):
        self.func = func
        self.args = args

pool = multiprocessing.Pool()
try:
    base_task = somefunc, someargs
    outstanding = collections.deque([pool.apply_async(*base_task)])
    while outstanding:
        result = outstanding.popleft().get()
        if isinstance(result, MoreWork):
            outstanding.append(pool.apply_async(result.func, result.args))
        else:
            ... do something with a "final" result, maybe breaking the loop ...
finally:
    pool.terminate()
What the functions are is up to you; they'd just return information in a MoreWork when there was more to do, not launch a task directly. The point is that by having the parent be solely responsible for task dispatch, and the workers solely responsible for task completion, you can't deadlock with all workers blocked waiting for tasks that are in the queue but not being processed.
This is also not at all optimized: ideally, you wouldn't block waiting on the first item in the queue if other items in the queue were already complete. It's a lot easier to do this with the concurrent.futures module, specifically with concurrent.futures.wait, which waits on the first available result from an arbitrary number of outstanding tasks, but on Python 2.7 you'd need the third-party futures backport from PyPI to get concurrent.futures.
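A rough sketch of that concurrent.futures variant (somefunc, someargs, and MoreWork are the placeholders from the snippet above):
from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED

with ProcessPoolExecutor() as executor:
    outstanding = {executor.submit(somefunc, *someargs)}
    while outstanding:
        # Block only until *some* task finishes, not necessarily the oldest one
        done, outstanding = wait(outstanding, return_when=FIRST_COMPLETED)
        for future in done:
            result = future.result()
            if isinstance(result, MoreWork):
                outstanding.add(executor.submit(result.func, *result.args))
            else:
                pass  # handle a "final" result here, possibly breaking out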

Related

How to fork and join multiple subprocesses with a global timeout in Python?

I want to execute some tasks in parallel in multiple subprocesses and time out if the tasks were not completed within some delay.
A first approach consists in forking and joining the subprocesses individually, with remaining timeouts computed with respect to the global timeout, as suggested in this answer. It works fine for me.
A second approach, which I want to use here, consists in creating a pool of subprocesses and waiting with the global timeout, as suggested in this answer.
However I have a problem with the second approach: after feeding the pool of subprocesses with tasks that have multiprocessing.Event() objects, waiting for their completion raises this exception:
RuntimeError: Condition objects should only be shared between processes through inheritance
Here is the Python code snippet:
import multiprocessing.pool
import time

class Worker:
    def __init__(self):
        self.event = multiprocessing.Event()  # commenting this removes the RuntimeError

    def work(self, x):
        time.sleep(1)
        return x * 10

if __name__ == "__main__":
    pool_size = 2
    timeout = 5
    with multiprocessing.pool.Pool(pool_size) as pool:
        result = pool.map_async(Worker().work, [4, 5, 2, 7])
        print(result.get(timeout))  # raises the RuntimeError
In the "Programming guidlines" section of the multiprocessing — Process-based parallelism documentation, there is this paragraph:
Better to inherit than pickle/unpickle
When using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
So multiprocessing.Event() caused a RuntimeError because it is not picklable, as demonstrated by the following Python code snippet:
import multiprocessing
import pickle
pickle.dumps(multiprocessing.Event())
which raises the same exception:
RuntimeError: Condition objects should only be shared between processes through inheritance
A solution is to use a proxy object:
A proxy is an object which refers to a shared object which lives (presumably) in a different process.
because:
An important feature of proxy objects is that they are picklable so they can be passed between processes.
multiprocessing.Manager().Event() creates a shared threading.Event() object and returns a proxy for it, so replacing this line:
self.event = multiprocessing.Event()
by the following line in the Python code snippet of the question solves the problem:
self.event = multiprocessing.Manager().Event()
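For completeness, a minimal version of the question's snippet with only that line changed:
import multiprocessing.pool
import time

class Worker:
    def __init__(self):
        # A manager-backed Event is a picklable proxy, so it can travel with the task
        self.event = multiprocessing.Manager().Event()

    def work(self, x):
        time.sleep(1)
        return x * 10

if __name__ == "__main__":
    pool_size = 2
    timeout = 5
    with multiprocessing.pool.Pool(pool_size) as pool:
        result = pool.map_async(Worker().work, [4, 5, 2, 7])
        print(result.get(timeout))  # prints [40, 50, 20, 70]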

multiprocessing's Queue inside Manager.Namespace()

I am currently creating a class that is supposed to execute some methods in a multi-threaded way, using the multiprocessing module. I execute the real computation using a Pool of n workers. Now I wanted to assign each of the currently n active workers an index between 0 and n for some other calculation. To do this, I wanted to use a shared Queue to assign the indices in such a way that at no time do two workers have the same id. To share the same Queue inside the class between the different threads, I wanted to store it inside a Manager.Namespace(). But doing this, I got some problems with the Queue. Therefore, I created a minimal version of my problem and ended up with something like this:
from multiprocess import Process, Queue, Manager, Pool, cpu_count

class A(object):
    def __init__(self):
        manager = Manager()
        self.ns = manager.Namespace()
        self.ns.q = manager.Queue()

    def foo(self):
        for i in range(10):
            print(i)
            self.ns.q.put(i)
            print(self.ns.q.get())
            print(self.ns.q.qsize())

a = A()
a.foo()
In this code, the execution stops before the second print statement; therefore, I think that no data is actually written to the Queue. When I remove the namespace-related stuff, the code works flawlessly. Is this the intended behaviour of the multiprocess objects and am I doing something wrong? Or is this some kind of bug?
Yes, you should not use Namespace here. When you put a Queue object into manager.Namespace(), each process gets a new Queue instance, and the writers/readers of those newly created queues have no connection to the parent process, so no messages are received by the worker processes. Share the Queue directly instead.
By the way, you mention "thread" many times, but in the context of the multiprocess module a worker is a process, not a thread.
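Going back to the main point, a minimal sketch of the same example with the Queue held directly (no Namespace) behaves as expected; the import follows the question's multiprocess package, and the standard multiprocessing module works the same way:
from multiprocess import Manager

class A(object):
    def __init__(self):
        manager = Manager()
        self.q = manager.Queue()  # keep the queue proxy itself, not a Namespace holding it

    def foo(self):
        for i in range(10):
            print(i)
            self.q.put(i)
            print(self.q.get())
            print(self.q.qsize())

a = A()
a.foo()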

Thread blocks in an RLock

I have this implementation:
import threading

def mlock(f):
    '''Method lock. Uses a class lock to execute the method'''
    def wrapper(self, *args, **kwargs):
        with self._lock:
            res = f(self, *args, **kwargs)
        return res
    return wrapper

class Lockable(object):
    def __init__(self):
        self._lock = threading.RLock()
Which I use in several places, for example:
class Fifo(Lockable):
    '''Implementation of a Fifo. It will grow until the given maxsize; then it will drop the head to add new elements'''
    def __init__(self, maxsize, name='FIFO', data=None, inserted=0, dropped=0):
        self.maxsize = maxsize
        self.name = name
        self.inserted = inserted
        self.dropped = dropped
        self._fifo = []
        self._cnt = None
        Lockable.__init__(self)
        if data:
            for d in data:
                self.put(d)

    @mlock
    def __len__(self):
        length = len(self._fifo)
        return length
...
The application is quite complex, but it works well. Just to make sure, I have been doing stress tests of the running service, and I find that it sometimes (rarely) deadlocks in the mlock. I assume another thread is holding the lock and not releasing it. How can I debug this? Please note that:
it is very difficult to reproduce: I need hours of testing to deadlock
the application is running in the background
once it deadlocks, I can not interact with it anymore
I would like to know:
what thread is holding the lock?
why is it not being released? I am using a context manager to acquire the lock, so it should always be released. Where is the bug?!
What options do I have to further debug this?
I have been checking whether there is any way of knowing which thread is holding an RLock, but there seems to be no API for this.
I don't think there's an easy solution for this, but it can be done with some work.
Personally, I've found the following useful (albeit in C++).
Start by creating a Lockable base class that tracks threads' interactions with it. A Lockable object will use an additional (non-recursive) lock to protect a dictionary mapping thread ids to their interactions with it:
When a thread tries to lock, it (locks and) creates an entry.
When it acquires the lock, it (locks and) modifies the entry.
When it releases the lock, it (locks and) removes the entry.
Additionally, a Lockable object will have a low-priority thread, waking up very rarely (once every several minutes), that checks whether there is any indication of a deadlock (approximated by the event that one thread has been holding the lock for a long time while at least one other thread has been waiting for it).
The entry for a thread should therefore include:
the operation's time
the stacktrace info leading to the operation.
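As a rough sketch of this idea (the class and names are made up, and the re-entrant case of the RLock is glossed over: nested holds simply overwrite the entry), the tracking wrapper could look like the following. The existing mlock decorator and with self._lock: blocks keep working once Lockable creates one of these instead of a bare RLock:
import threading
import time
import traceback

class TrackedRLock(object):
    '''Sketch: an RLock wrapper that records which thread is waiting for or holding it.'''

    def __init__(self):
        self._lock = threading.RLock()
        self._meta_lock = threading.Lock()  # protects _entries
        self._entries = {}                  # thread ident -> (state, timestamp, stack)

    def _record(self, state):
        with self._meta_lock:
            self._entries[threading.current_thread().ident] = (
                state, time.time(), traceback.format_stack())

    def __enter__(self):
        self._record('waiting')
        self._lock.acquire()
        self._record('holding')
        return self

    def __exit__(self, *exc_info):
        self._lock.release()
        with self._meta_lock:
            self._entries.pop(threading.current_thread().ident, None)

    def report(self):
        '''Run from a low-priority watchdog thread every few minutes.'''
        now = time.time()
        with self._meta_lock:
            for ident, (state, since, stack) in self._entries.items():
                print('thread %s %s for %.0f s' % (ident, state, now - since))
                print(''.join(stack))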
The problem is that this can alter the relative timing of threads, which might cause your program to go into different execution paths than it normally does.
Here you need to get creative. You might need to also induce (random) time lapses in these (and possibly other) operations.

Multiprocesses python with shared memory

I have an object that connects to a remote websocket server. I need to run some parallel processing at the same time. However, I don't want to create a new connection to the server. Since threads are the easier way to do this, this is what I have been using so far. However, I have been getting huge latency because of the GIL. Can I achieve the same thing as with threads, but with multiple processes in parallel?
This is the code that I have:
import thread

class WebSocketApp(object):
    def on_open(self):
        # Create another thread to make sure the commands are always being read
        print "Creating thread..."
        try:
            thread.start_new_thread(self.read_commands, ())
        except:
            print "Error: Unable to start thread"
Is there an equivalent way to do this with multiprocesses?
Thanks!
The direct equivalent is
import multiprocessing

class WebSocketApp(object):
    def on_open(self):
        # Create another process to make sure the commands are always being read
        print "Creating process..."
        try:
            multiprocessing.Process(target=self.read_commands).start()
        except:
            print "Error: Unable to start process"
However, this doesn't address the "shared memory" aspect, which has to be handled a little differently than with threads, where you can just use global variables. You haven't really specified which objects you need to share between processes, so it's hard to say exactly what approach you should take. The multiprocessing documentation does cover ways to deal with shared state, however. Note that in general it's better to avoid shared state if possible and to explicitly pass state between the processes, either as an argument to the Process constructor or via something like a Queue.
You sure can; use something along the lines of:
from multiprocessing import Process

class WebSocketApp(object):
    def on_open(self):
        # Create another process to make sure the commands are always being read
        print "Creating process..."
        try:
            p = Process(target=WebSocketApp.read_commands, args=(self,))  # Add other arguments to this tuple
            p.start()
        except:
            print "Error: Unable to start process"
It is important to note, however, that as soon as the object is sent to the other process, the two copies of self in the different processes diverge and represent different objects. If you wish to communicate, you will need to use something like the included Queue or Pipe from the multiprocessing module.
You may need to keep a reference to each process (p in this case) in your main thread in order to be able to communicate that your program is terminating (as a still-running child process will appear to hang the parent when it dies), but that depends on the nature of your program.
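For the communication part, a minimal sketch using a Queue (read_commands is moved to module level here and is only a placeholder for whatever actually reads the socket):
from multiprocessing import Process, Queue

def read_commands(q):
    # Child process: push whatever it reads onto the queue
    while True:
        q.put("some command read from the socket")

class WebSocketApp(object):
    def on_open(self):
        self.commands = Queue()
        self.reader = Process(target=read_commands, args=(self.commands,))
        self.reader.start()

    def poll_commands(self):
        # Runs in the parent process, e.g. from its main loop
        while not self.commands.empty():
            print self.commands.get()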
If you wish to keep the object the same, you can do one of a few things:
Make all of your object properties either single values or arrays and then do something similar to this:
from multiprocessing import Process, Value, Array

class WebSocketApp(object):
    def __init__(self):
        self.my_value = Value('d', 0.3)
        self.my_array = Array('i', [4, 10, 4])
    # -- Snip --
And then these values should work as shared memory. The types are very restrictive, though (you must specify their types).
A different approach is to use a manager:
from multiprocessing import Process, Manager

class WebSocketApp(object):
    def __init__(self):
        self.my_manager = Manager()
        self.my_list = self.my_manager.list()
        self.my_dict = self.my_manager.dict()
    # -- Snip --
And then self.my_list and self.my_dict act as a shared-memory list and dictionary respectively.
However, the types for both of these approaches can be restrictive, so you may have to roll your own technique with a Queue and a Semaphore. But it depends on what you're doing.
Check out the multiprocessing documentation for more information.

Python, non-blocking threads

There are a lot of tutorials etc. on Python and asynchronous coding techniques, but I am having difficulty filtering through the results to find what I need. I am new to Python, so that doesn't help.
Setup
I currently have two objects that look sort of like this (please excuse my python formatting):
class Alphabet(parent):
    def __init__(self, item):
        self.item = item

    def style_alphabet(self, callback):
        # this method presumably takes a very long time, and fills out some properties
        # of the Alphabet object
        callback()

class myobj(another_parent):
    def __init__(self):
        self.alphabets = []
        self.refresh()

    def foo(self):
        for item in ['a', 'b', 'c']:
            letters = Alphabet(item)
            self.alphabets.append(letters)
        self.screen_refresh()
        for item in self.alphabets:
            # this is the code that I want to run asynchronously. Typically, my efforts
            # all involve passing item.style_alphabet to the async object / method
            # and either calling start() here or in Alphabet
            item.style_alphabet(self.screen_refresh)

    def refresh(self):
        self.foo()
        # redraw screen, using the refreshed alphabets
        redraw_screen()

    def screen_refresh(self):
        # a lighter version of refresh()
        redraw_screen()
The idea is that the main thread initially draws the screen with incomplete Alphabet objects, fills out the Alphabet objects, updating the screen as they complete.
I've tried a number of implementations of threading.Thread, Queue.Queue, and even futures, and for some reason they either haven't worked or they have blocked the main thread, so that the initial draw doesn't take place.
A few of the async methods I've attempted:
import threading
from multiprocessing import Pool

class Async(threading.Thread):
    def __init__(self, f, cb):
        threading.Thread.__init__(self)
        self.f = f
        self.cb = cb

    def run(self):
        self.f()
        self.cb()

def run_as_thread(f):
    # When I tried this method, I assigned the callback to a property of "Alphabet"
    thr = threading.Thread(target=f)
    thr.start()

def run_async(f, cb):
    pool = Pool(processes=1)
    result = pool.apply_async(func=f, args=args, callback=cb)
I ended up writing a thread pool to deal with this use pattern. Try creating a queue and handing a reference to it off to all the worker threads. Add task objects to the queue from the main thread. Worker threads pull objects from the queue and invoke the functions. Add an event to each task, to be signaled on the worker thread at task completion. Keep a list of task objects on the main thread and use polling to see if the UI needs an update. One can get fancy and add a pointer to a callback function on the task objects if needed; a sketch of this pattern follows.
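A bare-bones sketch of that pattern; the worker count, the polling interval, and the example tasks are arbitrary placeholders:
import threading
import time
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

class Task(object):
    '''A unit of work plus an event the worker sets when it finishes.'''
    def __init__(self, func, *args):
        self.func = func
        self.args = args
        self.result = None
        self.done = threading.Event()

def worker(task_queue):
    while True:
        task = task_queue.get()
        task.result = task.func(*task.args)
        task.done.set()
        task_queue.task_done()

task_queue = queue.Queue()
for _ in range(4):
    t = threading.Thread(target=worker, args=(task_queue,))
    t.daemon = True
    t.start()

# Main thread: submit tasks, keep references, and poll for completions
tasks = [Task(pow, i, 2) for i in range(5)]
for task in tasks:
    task_queue.put(task)

pending = list(tasks)
while pending:
    finished = [t for t in pending if t.done.is_set()]
    for t in finished:
        print('result: %r' % (t.result,))  # e.g. trigger a screen_refresh() here
        pending.remove(t)
    time.sleep(0.1)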
My solution was inspired by what I found on Google: http://code.activestate.com/recipes/577187-python-thread-pool/
I kept improving on that design to add features and give the threading, multiprocessing, and parallel python modules a consistent interface. My implementation is at:
https://github.com/nornir/nornir-pools
Docs:
http://nornir.github.io/packages/nornir_pools.html
If you are new to Python and not familiar with the GIL, I suggest searching for Python threading and the global interpreter lock (GIL). It isn't a happy story. Generally I find I need to use the multiprocessing module to get decent performance.
Hope some of this helps.
