How to functionally compose futures in Python?

I have a threading object that I can't distribute across a ProcessPoolExecutor, but that I would like returned as part of a future. If I already have a future, is there a way to apply a function to its completed value, i.e. Future a -> (a -> b) -> Future b?
import concurrent.futures
import threading

def three(x):
    return 2 + x

if __name__ == '__main__':
    trackedItem = (3, threading.Event())
    pool = concurrent.futures.ProcessPoolExecutor(3)
    poolJob = (pool.submit(three, trackedItem[0]), trackedItem[1])  # (Future(int), Event)
    # *** something magic goes here ***
    # Trying to transform it into Future((int, Event))

Here's a way that uses simpler setup code, without threading.Event, since that doesn't seem necessary to solve the problem. Basically, you can create future_b as a new Future() yourself and use the add_done_callback method on future_a to set the result of future_b. Here, func_a is the computation that produces the result of future_a, and func_b is the computation that produces the result of future_b from the result of future_a.
from concurrent.futures import ProcessPoolExecutor, Future

def func_a(x):
    return 2 + x

def func_b(x):
    return 10 * x

if __name__ == '__main__':
    pool = ProcessPoolExecutor(3)
    future_a = pool.submit(func_a, 3)

    future_b = Future()
    future_b.set_running_or_notify_cancel()

    def callback(f):
        x = f.result()
        y = func_b(x)
        future_b.set_result(y)

    future_a.add_done_callback(callback)
    print(future_b.result())  # 50
If you want a helper function to do this, you can write one: map_future takes a future and a mapping function, and returns the new mapped future as required. This version handles an exception in case f.result() or func_b throws one:
def map_future(future_a, func):
    future_b = Future()
    future_b.set_running_or_notify_cancel()

    def callback(f):
        try:
            x = f.result()
            y = func(x)
            future_b.set_result(y)
        except Exception as e:
            future_b.set_exception(e)

    future_a.add_done_callback(callback)
    return future_b
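For example, reusing func_a, func_b and the executor from the snippets above, a minimal usage sketch:

if __name__ == '__main__':
    pool = ProcessPoolExecutor(3)
    future_a = pool.submit(func_a, 3)        # will resolve to 5
    future_b = map_future(future_a, func_b)  # will resolve to 10 * 5
    print(future_b.result())                 # 50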
Caveats: this goes against the advice in the documentation for the Future class, which says:
Future instances are created by Executor.submit() and should not be created directly except for testing.
Also, if you have any errors which aren't subclasses of Exception in the callback, they will be "logged and ignored" according to the docs. I've chosen to only catch Exception in this code for simplicity, but you might prefer the sys.exc_info()[0] way of catching every possible thing that could be raised.
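If you do want to propagate those non-Exception errors as well, one possible variant (a sketch of my own, not from the original answer) is to catch BaseException in the callback:

from concurrent.futures import Future

def map_future_all(future_a, func):
    # like map_future, but also propagates BaseException subclasses
    # (e.g. KeyboardInterrupt raised inside func) instead of letting
    # them be "logged and ignored"
    future_b = Future()
    future_b.set_running_or_notify_cancel()

    def callback(f):
        try:
            future_b.set_result(func(f.result()))
        except BaseException as e:
            future_b.set_exception(e)

    future_a.add_done_callback(callback)
    return future_b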

@kaya3 provided a great answer, but I ran into a problem when adding exception handling to it in order to close the pool. You can find my example cpchung_example below to see how to compose futures functionally. Exception handling still needs to be added; I don't have a good solution for that yet.
For comparison, I put them all into one file:
from concurrent.futures import ProcessPoolExecutor, Future
from concurrent.futures.thread import ThreadPoolExecutor

def map_future(future_a, func):
    future_b = Future()
    future_b.set_running_or_notify_cancel()

    def callback(f):
        try:
            x = f.result()
            y = func(x)
            future_b.set_result(y)
        except Exception as e:
            future_b.set_exception(e)

    future_a.add_done_callback(callback)
    return future_b

def func_a(x):
    return 2 + x

def func_b(x):
    return 3 * x

def func_c(x):
    raise NameError('Hi There')
    return 4 * x

def kaya3_example():
    future_a = pool.submit(func_a, 3)
    future_b = Future()
    future_b.set_running_or_notify_cancel()

    def callback(f):
        x = f.result()
        y = func_b(x)
        future_b.set_result(y)

    future_a.add_done_callback(callback)
    print(future_b.result())  # 15

def exception_handling():
    try:
        future_a = pool.submit(func_a, 3)
        future_b = map_future(future_a, func_b)
        future_c = map_future(future_b, func_c)
        print(future_c.result())
    except Exception as e:
        pool.shutdown()
    pool.shutdown()

def f(x, y):
    return x * y

def cpchung_example():
    with ThreadPoolExecutor(max_workers=1) as executor:
        a = executor.submit(f, 2, 3)
        b = executor.submit(f, 4, 5)
        c = executor.submit(f, a.result(), b.result())
        print(c.result())

if __name__ == '__main__':
    pool = ProcessPoolExecutor(3)
    kaya3_example()
    cpchung_example()
    # exception_handling()  # not working, still wip
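For what it's worth, one possible way to finish exception_handling (my own sketch, not part of either original answer) is to let the composed future re-raise the error and shut the pool down in a finally block:

def exception_handling_sketch():
    pool = ProcessPoolExecutor(3)
    try:
        future_a = pool.submit(func_a, 3)
        future_b = map_future(future_a, func_b)
        future_c = map_future(future_b, func_c)
        # func_c raises NameError; map_future stores it in future_c,
        # so .result() re-raises it here
        print(future_c.result())
    except NameError as e:
        print('caught:', e)
    finally:
        pool.shutdown()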


concurrent.futures.as_completed(...) left hanging after jobs have been submitted to ProcessPoolExecutor

My code is similar to the example below. jobs1 and jobs2 would be calls to different functions: one is camelot-py::read_pdf and the other is a call to a library that makes a (blocking) request.
from concurrent import futures
import time

n = 200
t0 = time.time()

def f(x):
    t = x
    while x < 100:
        x += 1
    print("f\t", t)
    return x

def h(jobs):
    all = []
    for job in futures.as_completed(jobs):
        f_res = job.result()
        all.append(f_res)
        print("h\t", f_res)
    return all

def g(x, p_executor: futures.ProcessPoolExecutor):
    # IO-bound task except for calling f/h
    time.sleep(1)
    jobs1 = [p_executor.submit(f, x) for x in [x, x+1, x+2]]
    jobs2 = [p_executor.submit(f, x) for x in [x, x-1, x-2]]
    h(jobs1)
    h(jobs2)
    return x

with futures.ProcessPoolExecutor(max_workers=4) as p_executor:
    with futures.ThreadPoolExecutor() as t_executor:
        jobs = [t_executor.submit(g, x, p_executor) for x in range(n)]
        all = set(range(n))
        for job in futures.as_completed(jobs):
            x = job.result()
            all.remove(x)
            print("main loop\t", x)
        print("missing \t", all)
While this example runs without problems on my machine, my original code doesn't. The ProcessPoolExecutor is meant to run the function below but, in some cases, does not print "end", leaving my equivalent of the function h above (and g, and the main loop) hanging.
import pickle

def camelot_extraction(pickled_read_pdf, fixed_options, options):
    unpickled_read_pdf = pickle.loads(pickled_read_pdf)
    print("start")
    x = (options, unpickled_read_pdf(**fixed_options, **options))
    print("end")
    return x
I got rid of the ThreadPoolExecutor and am able to run the main loop sequentially with the expected results, but I don't understand how this can be happening in the alternative where I make use of both the thread and process pool executors.
Any idea about what might be happening?
Thanks a lot!

Python: setting the state of a future

Is it bad practice to set the state of a future in order to pass arguments?
Specifically, using something like future.q = q so that q is available in the callback:
from threading import Thread
from threading import RLock
from threading import current_thread
from concurrent.futures import Future
import time
import random

class NonBlockingQueue:
    def __init__(self, max_size):
        self.max_size = max_size
        self.q = []
        self.q_waiting_puts = []
        self.q_waiting_gets = []
        self.lock = RLock()

    def enqueue(self, item):
        future = None
        with self.lock:
            curr_size = len(self.q)
            # queue is full so create a future for a put request
            if curr_size == self.max_size:
                future = Future()
                self.q_waiting_puts.append(future)
            else:
                self.q.append(item)
                # remember to resolve a pending future for a get request
                if len(self.q_waiting_gets) != 0:
                    future_get = self.q_waiting_gets.pop(0)
                    future_get.set_result(self.q.pop(0))
        return future

def retry_enqueue(future):
    print("\nCallback invoked by thread {0}".format(current_thread().getName()))
    item = future.item
    q = future.q
    new_future = q.enqueue(item)
    if new_future is not None:
        new_future.item = item
        new_future.q = q
        new_future.add_done_callback(retry_enqueue)
    else:
        print("\n{0} successfully added on a retry".format(item))

### MAIN CODE
def producer_thread(q):
    item = 1
    while 1:
        future = q.enqueue(item)
        if future is not None:
            future.item = item
            future.q = q
            future.add_done_callback(retry_enqueue)
        item += 1
        # slow down the producer
        time.sleep(random.randint(1, 3))
It is not a good idea to pass arguments around like this.
The reason is that in the future (no pun intended), setting custom attributes on a Future object could simply be disallowed, which would break your code.
A better solution is to use functools.partial or a lambda to pass extra arguments to the callback.
First, accept q as an argument in the retry_enqueue function:
def retry_enqueue(future, q):  # accept 'q' argument
    ...
Example using functools.partial:
import functools
future.add_done_callback(functools.partial(retry_enqueue, q=q))
Example using lambda:
future.add_done_callback(lambda future: retry_enqueue(future, q))
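The same technique also removes the need for future.item. Here is a sketch of how retry_enqueue and the producer could be wired up with partial (the extra item parameter is my own addition, not from the original code):

import functools

def retry_enqueue(future, q, item):
    new_future = q.enqueue(item)
    if new_future is not None:
        # still full: register the same callback again with the same arguments
        new_future.add_done_callback(
            functools.partial(retry_enqueue, q=q, item=item))
    else:
        print("{0} successfully added on a retry".format(item))

# in producer_thread:
#   future.add_done_callback(functools.partial(retry_enqueue, q=q, item=item))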

*why* does multiprocessing serialize my function and closure?

According to https://docs.python.org/3/library/multiprocessing.html
multiprocessing forks (for *nix) to create a worker process to execute tasks. We can verify this by setting up a global variable in a module prior to the fork.
If the worker function imports that module and finds the variable present, then the process memory has been copied. And so it is:
import os

def f(x):
    import sys
    return sys._mypid   # <<< value is returned by subprocess!

def set_state():
    import sys
    sys._mypid = os.getpid()

def g():
    from multiprocessing import Pool
    pool = Pool(4)
    try:
        for z in pool.imap(f, range(1000)):
            print(z)
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    set_state()
    g()
However, if things work this way, what business does multiprocessing have in serializing the work function, f?
In this example:
import os

def set_state():
    import sys
    sys._mypid = os.getpid()

def g():
    def f(x):
        import sys
        return sys._mypid

    from multiprocessing import Pool
    pool = Pool(4)
    try:
        for z in pool.imap(f, range(1000)):
            print(z)
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    set_state()
    g()
we get:
AttributeError: Can't pickle local object 'g.<locals>.f'
Stack Overflow and the internet are full of ways to work around this. (Python's standard pickle module can handle functions, but not functions with closure data.)
But why do we get this error? A copy-on-write version of f is in the forked process's memory. Why does it need to be serialized at all?
Derp -- it has to be this way because:
pool = Pool(4) <<< processes created here
for z in pool.imap(f, range(1000)): <<< reference to function
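As a quick illustration of what actually gets serialized (my own sketch, not from the original post): a top-level function is pickled by reference, essentially just its module and name, which is why a g.<locals>.f closure with no importable name fails.

import pickle

def f(x):
    return x * x

if __name__ == '__main__':
    payload = pickle.dumps(f)
    # the payload contains the names '__main__' and 'f', not the function's
    # bytecode; the worker re-imports the function by that name
    print(payload)

    def g():
        def inner(x):
            return x * x
        return inner

    # pickle.dumps(g()) raises:
    # AttributeError: Can't pickle local object 'g.<locals>.inner'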
FYI... anyone wanting to fork, where the new process has access to the function (and thereby avoids serializing the function), can follow this pattern:
import collections
import multiprocessing as mp
import os
import pickle
import threading
import time

_STATUS_DATA = 0
_STATUS_ERR = 1
_STATUS_POISON = 2

Message = collections.namedtuple(
    "Message",
    ["status",
     "payload",
     "sequence_id"
     ]
)

def parallel_map(
        target,
        args,
        num_processes,
        inq_maxsize=None,
        outq_maxsize=None,
        serialize=pickle.dumps,
        deserialize=pickle.loads,
        start_method="fork",
        preserve_order=True,
):
    """
    :param target: Target function
    :param args: Iterable of single parameter arguments for target.
    :param num_processes: Number of processes.
    :param inq_maxsize:
    :param outq_maxsize:
    :param serialize:
    :param deserialize:
    :param start_method:
    :param preserve_order: If true, results are returned in the order given by args.
        Otherwise, the first result computed is returned first.
    :return:
    """
    if inq_maxsize is None: inq_maxsize = 10 * num_processes
    if outq_maxsize is None: outq_maxsize = 10 * num_processes
    inq = mp.Queue(maxsize=inq_maxsize)
    outq = mp.Queue(maxsize=outq_maxsize)
    poison = serialize(Message(_STATUS_POISON, None, -1))
    deserialize(poison)  # Test

    def work():
        while True:
            obj = inq.get()
            # print("{} - GET .. OK".format(os.getpid()))
            # inq.task_done()
            try:
                msg = deserialize(obj)
                assert isinstance(msg, Message)
                if msg.status == _STATUS_POISON:
                    outq.put(serialize(Message(_STATUS_POISON, None, msg.sequence_id)))
                    # print("{} - RETURN POISON .. OK".format(os.getpid()))
                    return
                else:
                    args, kw = msg.payload
                    result = target(*args, **kw)
                    outq.put(serialize(Message(_STATUS_DATA, result, msg.sequence_id)))
            except Exception as e:
                try:
                    outq.put(serialize(Message(_STATUS_ERR, e, msg.sequence_id)))
                except Exception as e2:
                    try:
                        outq.put(serialize(Message(_STATUS_ERR, None, -1)))
                        # outq.put(serialize(1, Exception("Unable to serialize response")))
                        # TODO. Log exception
                    except Exception as e3:
                        pass

    if start_method == "thread":
        _start_method = threading.Thread
    else:
        _start_method = mp.get_context('fork').Process

    processes = [
        _start_method(
            target=work,
            name="parallel_map.work"
        )
        for _ in range(num_processes)]

    for p in processes:
        p.start()

    quitting = []

    def quit_processes():
        if not quitting:
            quitting.append(1)
            # Send poison pills - kill child processes
            for _ in range(num_processes):
                inq.put(poison)

    nsent = [0]

    def send():
        # Send the data
        for seq_id, arg in enumerate(args):
            obj = ((arg,), {})
            inq.put(serialize(Message(_STATUS_DATA, obj, seq_id)))
            nsent[0] += 1
        quit_processes()

    # Publish
    sender = threading.Thread(
        target=send,
        name="parallel_map.sender",
        daemon=True)
    sender.start()

    try:
        # Consume
        nquit = [0]
        buffer = {}
        nyielded = 0
        while True:
            result = outq.get()  # Waiting here
            # outq.task_done()
            msg = deserialize(result)
            assert isinstance(msg, Message)
            if msg.status == _STATUS_POISON:
                nquit[0] += 1
                # print(">>> QUIT ACK {}".format(nquit[0]))
                if nquit[0] >= num_processes:
                    break
            else:
                assert msg.sequence_id >= 0
                if preserve_order:
                    buffer[msg.sequence_id] = msg
                    while True:
                        if nyielded not in buffer:
                            break
                        msg = buffer.pop(nyielded)
                        nyielded += 1
                        if msg.status == _STATUS_ERR:
                            if isinstance(msg.payload, Exception):
                                raise msg.payload
                            else:
                                raise Exception("Unexpected exception")
                        else:
                            assert msg.status == _STATUS_DATA
                            yield msg.payload
                else:
                    if msg.status == _STATUS_ERR:
                        if isinstance(msg.payload, Exception):
                            raise msg.payload
                        else:
                            raise Exception("Unexpected exception")
                    else:
                        assert msg.status == _STATUS_DATA
                        yield msg.payload
            # if nyielded == nsent:
            #     break
    except Exception as e:
        raise
    finally:
        if not quitting:
            quit_processes()
        sender.join()
        for p in processes:
            p.join()

def f(x):
    time.sleep(0.01)
    if x == -1:
        raise Exception("Boo")
    return x
Usage:
def f(x):
    time.sleep(0.01)
    if x == -1:
        raise Exception("Boo")
    return x

for result in parallel_map(target=f,            # <<< not serialized
                           args=range(100),
                           num_processes=8,
                           start_method="fork"):
    pass
... with that caveat: for every thread you have in your program when you fork, a puppy dies.

Can I pass a method to apply_async or map in python multiprocessing?

How can I get the following to work? The main point is that I want to run a method (and not a function) asynchronously.
from multiprocessing import Pool

class Async:
    def __init__(self, pool):
        self.pool = pool
        self.run()

    def run(self):
        p.apply_async(self.f, (10, ))

    def f(self, x):
        print x*x

if __name__ == '__main__':
    p = Pool(5)
    a = Async(p)
    p.close()
    p.join()
This prints nothing.
The problem appears to be due to the fact that multiprocessing needs to pickle self.f while bound methods are not picklable. There is a discussion on how to solve the problem here.
The apply_async apparently creates an exception which is put inside the future returned. That's why nothing is printed. If a get is executed on the future, then the exception is raised.
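To actually see that hidden exception, call .get() on the AsyncResult returned by apply_async (Python 3 also accepts an error_callback argument). A sketch based on the question's code, my own variation:

from multiprocessing import Pool

class Async:
    def __init__(self, pool):
        self.pool = pool
        self.run()

    def run(self):
        result = self.pool.apply_async(self.f, (10,))
        # .get() re-raises whatever exception apply_async stored on the
        # result (here, the failure to pickle the bound method / its Pool)
        print(result.get(timeout=5))

    def f(self, x):
        return x * x

if __name__ == '__main__':
    p = Pool(5)
    a = Async(p)
    p.close()
    p.join()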
It's definitely possible to thread class methods using a thread pool in Python 2 - the following program did what I would expect:
#!/usr/bin/env python
from multiprocessing.pool import ThreadPool

class TestAsync():
    def __init__(self):
        pool = ThreadPool(processes=2)
        async_completions = []
        for a in range(2):
            async_completions.append(pool.apply_async(self.print_int, (a,)))
        for completion in async_completions:
            res = completion.get()
            print("res = %d" % res)

    def print_int(self, value):
        print(value)
        return (value * 10)

a = TestAsync()

Multiprocessing: How to use Pool.map on a function defined in a class?

When I run something like:
from multiprocessing import Pool

p = Pool(5)

def f(x):
    return x*x

p.map(f, [1,2,3])
it works fine. However, putting this as a function of a class:
class calculate(object):
    def run(self):
        def f(x):
            return x*x
        p = Pool()
        return p.map(f, [1,2,3])

cl = calculate()
print cl.run()
Gives me the following error:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/sw/lib/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/sw/lib/python2.6/threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/sw/lib/python2.6/multiprocessing/pool.py", line 225, in _handle_tasks
    put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I've seen a post from Alex Martelli dealing with the same kind of problem, but it wasn't explicit enough.
I could not use the code posted so far, because code using "multiprocessing.Pool" does not work with lambda expressions, and code not using "multiprocessing.Pool" spawns as many processes as there are work items.
I adapted the code so that it spawns a predefined number of workers and only iterates through the input list if there is an idle worker. I also enabled "daemon" mode for the workers so that Ctrl-C works as expected.
import multiprocessing

def fun(f, q_in, q_out):
    while True:
        i, x = q_in.get()
        if i is None:
            break
        q_out.put((i, f(x)))

def parmap(f, X, nprocs=multiprocessing.cpu_count()):
    q_in = multiprocessing.Queue(1)
    q_out = multiprocessing.Queue()
    proc = [multiprocessing.Process(target=fun, args=(f, q_in, q_out))
            for _ in range(nprocs)]
    for p in proc:
        p.daemon = True
        p.start()
    sent = [q_in.put((i, x)) for i, x in enumerate(X)]
    [q_in.put((None, None)) for _ in range(nprocs)]
    res = [q_out.get() for _ in range(len(sent))]
    [p.join() for p in proc]
    return [x for i, x in sorted(res)]

if __name__ == '__main__':
    print(parmap(lambda i: i * 2, [1, 2, 3, 4, 6, 7, 8]))
Multiprocessing and pickling are broken and limited unless you jump outside the standard library.
If you use a fork of multiprocessing called pathos.multiprocessing, you can directly use classes and class methods in multiprocessing's map functions. This is because dill is used instead of pickle or cPickle, and dill can serialize almost anything in Python.
pathos.multiprocessing also provides an asynchronous map function… and it can map functions with multiple arguments (e.g. map(math.pow, [1,2,3], [4,5,6]))
See discussions:
What can multiprocessing and dill do together?
and:
http://matthewrocklin.com/blog/work/2013/12/05/Parallelism-and-Serialization
It even handles the code you wrote initially, without modification, and from the interpreter. Why do anything else that's more fragile and specific to a single case?
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> class calculate(object):
...   def run(self):
...     def f(x):
...       return x*x
...     p = Pool()
...     return p.map(f, [1,2,3])
...
>>> cl = calculate()
>>> print cl.run()
[1, 4, 9]
Get the code here:
https://github.com/uqfoundation/pathos
And, just to show off a little more of what it can do:
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>>
>>> p = Pool(4)
>>>
>>> def add(x,y):
...   return x+y
...
>>> x = [0,1,2,3]
>>> y = [4,5,6,7]
>>>
>>> p.map(add, x, y)
[4, 6, 8, 10]
>>>
>>> class Test(object):
...   def plus(self, x, y):
...     return x+y
...
>>> t = Test()
>>>
>>> p.map(Test.plus, [t]*4, x, y)
[4, 6, 8, 10]
>>>
>>> res = p.amap(t.plus, x, y)
>>> res.get()
[4, 6, 8, 10]
I also was annoyed by restrictions on what sort of functions pool.map could accept. I wrote the following to circumvent this. It appears to work, even for recursive use of parmap.
from multiprocessing import Process, Pipe
from itertools import izip

def spawn(f):
    def fun(pipe, x):
        pipe.send(f(x))
        pipe.close()
    return fun

def parmap(f, X):
    pipe = [Pipe() for x in X]
    proc = [Process(target=spawn(f), args=(c, x)) for x, (p, c) in izip(X, pipe)]
    [p.start() for p in proc]
    [p.join() for p in proc]
    return [p.recv() for (p, c) in pipe]

if __name__ == '__main__':
    print parmap(lambda x: x**x, range(1, 5))
There is currently no solution to your problem, as far as I know: the function that you give to map() must be accessible through an import of your module. This is why robert's code works: the function f() can be obtained by importing the following code:
from multiprocessing import Pool

def f(x):
    return x*x

class Calculate(object):
    def run(self):
        p = Pool()
        return p.map(f, [1,2,3])

if __name__ == '__main__':
    cl = Calculate()
    print cl.run()
I actually added a "main" section, because this follows the recommendations for the Windows platform ("Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects").
I also added an uppercase letter in front of Calculate, so as to follow PEP 8. :)
The solution by mrule is correct but has a bug: if the child sends back a large amount of data, it can fill the pipe's buffer, blocking on the child's pipe.send(), while the parent is waiting for the child to exit with join(). The solution is to read the child's data before join()ing the child. Furthermore, the child should close the parent's end of the pipe to prevent a deadlock. The code below fixes that. Also be aware that this parmap creates one process per element in X. A more advanced solution is to use multiprocessing.cpu_count() to divide X into a number of chunks, and then merge the results before returning. I leave that as an exercise to the reader so as not to spoil the conciseness of the nice answer by mrule. ;)
from multiprocessing import Process, Pipe
from itertools import izip

def spawn(f):
    def fun(ppipe, cpipe, x):
        ppipe.close()
        cpipe.send(f(x))
        cpipe.close()
    return fun

def parmap(f, X):
    pipe = [Pipe() for x in X]
    proc = [Process(target=spawn(f), args=(p, c, x)) for x, (p, c) in izip(X, pipe)]
    [p.start() for p in proc]
    ret = [p.recv() for (p, c) in pipe]
    [p.join() for p in proc]
    return ret

if __name__ == '__main__':
    print parmap(lambda x: x**x, range(1, 5))
I've also struggled with this. I had functions as data members of a class, as a simplified example:
from multiprocessing import Pool
import itertools

pool = Pool()

class Example(object):
    def __init__(self, my_add):
        self.f = my_add

    def add_lists(self, list1, list2):
        # Needed to do something like this (the following line won't work)
        return pool.map(self.f, list1, list2)
I needed to use the function self.f in a Pool.map() call from within the same class and self.f did not take a tuple as an argument. Since this function was embedded in a class, it was not clear to me how to write the type of wrapper other answers suggested.
I solved this problem by using a different wrapper that takes a tuple/list, where the first element is the function, and the remaining elements are the arguments to that function, called eval_func_tuple(f_args). Using this, the problematic line can be replaced by return pool.map(eval_func_tuple, itertools.izip(itertools.repeat(self.f), list1, list2)). Here is the full code:
File: util.py
def add(a, b): return a+b

def eval_func_tuple(f_args):
    """Takes a tuple of a function and args, evaluates and returns result"""
    return f_args[0](*f_args[1:])
File: main.py
from multiprocessing import Pool
import itertools
import util

pool = Pool()

class Example(object):
    def __init__(self, my_add):
        self.f = my_add

    def add_lists(self, list1, list2):
        # The following line will now work
        return pool.map(util.eval_func_tuple,
                        itertools.izip(itertools.repeat(self.f), list1, list2))

if __name__ == '__main__':
    myExample = Example(util.add)
    list1 = [1, 2, 3]
    list2 = [10, 20, 30]
    print myExample.add_lists(list1, list2)
Running main.py will give [11, 22, 33]. Feel free to improve this; for example, eval_func_tuple could also be modified to take keyword arguments, as in the sketch below.
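Here is one possible sketch of that keyword-argument variant (eval_func_kwargs is a name I made up, not from the original answer); it would live in util.py next to eval_func_tuple:

def eval_func_kwargs(f_args_kwargs):
    """Takes (function, args tuple, kwargs dict), evaluates and returns the result."""
    f, args, kwargs = f_args_kwargs
    return f(*args, **kwargs)

# usage from Example.add_lists, e.g.:
#   pool.map(util.eval_func_kwargs,
#            [(self.f, (a, b), {}) for a, b in zip(list1, list2)])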
On another note, in another answer, the function "parmap" can be made more efficient for the case where there are more processes than available CPUs. I'm copying an edited version below. This is my first post and I wasn't sure if I should directly edit the original answer. I also renamed some variables.
import multiprocessing
from multiprocessing import Process, Pipe
from itertools import izip

def spawn(f):
    def fun(pipe, x):
        pipe.send(f(x))
        pipe.close()
    return fun

def parmap(f, X):
    pipe = [Pipe() for x in X]
    processes = [Process(target=spawn(f), args=(c, x)) for x, (p, c) in izip(X, pipe)]
    numProcesses = len(processes)
    processNum = 0
    outputList = []
    while processNum < numProcesses:
        endProcessNum = min(processNum + multiprocessing.cpu_count(), numProcesses)
        for proc in processes[processNum:endProcessNum]:
            proc.start()
        for proc in processes[processNum:endProcessNum]:
            proc.join()
        for proc, c in pipe[processNum:endProcessNum]:
            outputList.append(proc.recv())
        processNum = endProcessNum
    return outputList

if __name__ == '__main__':
    print parmap(lambda x: x**x, range(1, 5))
I know that this question was asked 8 years and 10 months ago, but I want to present my solution:
from multiprocessing import Pool

class Test:
    def __init__(self):
        self.main()

    @staticmethod
    def methodForMultiprocessing(x):
        print(x*x)

    def main(self):
        if __name__ == "__main__":
            p = Pool()
            p.map(Test.methodForMultiprocessing, list(range(1, 11)))
            p.close()

TestObject = Test()
You just need to make your class function into a static method. But it's also possible with a class method:
from multiprocessing import Pool

class Test:
    def __init__(self):
        self.main()

    @classmethod
    def methodForMultiprocessing(cls, x):
        print(x*x)

    def main(self):
        if __name__ == "__main__":
            p = Pool()
            p.map(Test.methodForMultiprocessing, list(range(1, 11)))
            p.close()

TestObject = Test()
Tested in Python 3.7.3
I know this was asked over 6 years ago now, but I just wanted to add my solution, as some of the suggestions above seem horribly complicated, while mine turned out to be very simple.
All I had to do was wrap the pool.map() call in a helper function, passing the class object along with the args for the method as a tuple, which looked a bit like this.
from multiprocessing import Pool

def run_in_parallel(args):
    return args[0].method(args[1])

myclass = MyClass()
method_args = [1, 2, 3, 4, 5, 6]
args_map = [(myclass, arg) for arg in method_args]
pool = Pool()
pool.map(run_in_parallel, args_map)
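Put together as a self-contained sketch (MyClass and its method here are stand-ins I invented to make the snippet runnable):

from multiprocessing import Pool

class MyClass:
    def method(self, x):
        return x * x

def run_in_parallel(args):
    # args is an (instance, argument) tuple; the instance itself gets pickled
    obj, arg = args
    return obj.method(arg)

if __name__ == '__main__':
    myclass = MyClass()
    method_args = [1, 2, 3, 4, 5, 6]
    args_map = [(myclass, arg) for arg in method_args]
    with Pool() as pool:
        print(pool.map(run_in_parallel, args_map))  # [1, 4, 9, 16, 25, 36]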
I took klaus se's and aganders3's answers and made a documented module that is more readable and holds in one file. You can just add it to your project. It even has an optional progress bar!
"""
The ``processes`` module provides some convenience functions
for using parallel processes in python.
Adapted from http://stackoverflow.com/a/16071616/287297
Example usage:
print prll_map(lambda i: i * 2, [1, 2, 3, 4, 6, 7, 8], 32, verbose=True)
Comments:
"It spawns a predefined amount of workers and only iterates through the input list
if there exists an idle worker. I also enabled the "daemon" mode for the workers so
that KeyboardInterupt works as expected."
Pitfalls: all the stdouts are sent back to the parent stdout, intertwined.
Alternatively, use this fork of multiprocessing:
https://github.com/uqfoundation/multiprocess
"""
# Modules #
import multiprocessing
from tqdm import tqdm
################################################################################
def apply_function(func_to_apply, queue_in, queue_out):
while not queue_in.empty():
num, obj = queue_in.get()
queue_out.put((num, func_to_apply(obj)))
################################################################################
def prll_map(func_to_apply, items, cpus=None, verbose=False):
# Number of processes to use #
if cpus is None: cpus = min(multiprocessing.cpu_count(), 32)
# Create queues #
q_in = multiprocessing.Queue()
q_out = multiprocessing.Queue()
# Process list #
new_proc = lambda t,a: multiprocessing.Process(target=t, args=a)
processes = [new_proc(apply_function, (func_to_apply, q_in, q_out)) for x in range(cpus)]
# Put all the items (objects) in the queue #
sent = [q_in.put((i, x)) for i, x in enumerate(items)]
# Start them all #
for proc in processes:
proc.daemon = True
proc.start()
# Display progress bar or not #
if verbose:
results = [q_out.get() for x in tqdm(range(len(sent)))]
else:
results = [q_out.get() for x in range(len(sent))]
# Wait for them to finish #
for proc in processes: proc.join()
# Return results #
return [x for i, x in sorted(results)]
################################################################################
def test():
def slow_square(x):
import time
time.sleep(2)
return x**2
objs = range(20)
squares = prll_map(slow_square, objs, 4, verbose=True)
print "Result: %s" % squares
EDIT: Added @alexander-mcfarlane's suggestion and a test function
Functions defined in classes (even within functions within classes) don't really pickle. However, this works:
from multiprocessing import Pool

def f(x):
    return x*x

class calculate(object):
    def run(self):
        p = Pool()
        return p.map(f, [1,2,3])

cl = calculate()
print cl.run()
I modified klaus se's method because while it was working for me with small lists, it would hang when the number of items was ~1000 or greater. Instead of pushing the jobs one at a time with the None stop condition, I load up the input queue all at once and just let the processes munch on it until it's empty.
from multiprocessing import cpu_count, Queue, Process

def apply_func(f, q_in, q_out):
    while not q_in.empty():
        i, x = q_in.get()
        q_out.put((i, f(x)))

# map a function using a pool of processes
def parmap(f, X, nprocs=cpu_count()):
    q_in, q_out = Queue(), Queue()
    proc = [Process(target=apply_func, args=(f, q_in, q_out)) for _ in range(nprocs)]
    sent = [q_in.put((i, x)) for i, x in enumerate(X)]
    [p.start() for p in proc]
    res = [q_out.get() for _ in sent]
    [p.join() for p in proc]
    return [x for i, x in sorted(res)]
Edit: unfortunately now I am running into this error on my system: Multiprocessing Queue maxsize limit is 32767, hopefully the workarounds there will help.
You can run your code without any issues if you manually exclude the Pool object from the class's pickled state, since it is not picklable, as the error says. You can do this with the __getstate__ function (look here too) as follows. When you run map, map_async, etc., pickling will look for the __getstate__ and __setstate__ methods and call them if it finds them:
from multiprocessing import Pool

class calculate(object):
    def __init__(self):
        self.p = Pool()

    def __getstate__(self):
        self_dict = self.__dict__.copy()
        del self_dict['p']
        return self_dict

    def __setstate__(self, state):
        self.__dict__.update(state)

    def f(self, x):
        return x*x

    def run(self):
        return self.p.map(self.f, [1,2,3])
Then do:
cl = calculate()
cl.run()
will give you the output:
[1, 4, 9]
I've tested the above code in Python 3.x and it works.
Here is my solution, which I think is a bit less hackish than most others here. It is similar to nightowl's answer.
from functools import partial
from multiprocessing import Pool

someclasses = [MyClass(), MyClass(), MyClass()]

def method_caller(some_object, some_method='the method'):
    return getattr(some_object, some_method)()

othermethod = partial(method_caller, some_method='othermethod')

with Pool(6) as pool:
    result = pool.map(othermethod, someclasses)
This may not be a very good solution, but in my case I solved it like this.
from multiprocessing import Pool

def foo1(data):
    self = data.get('slf')
    lst = data.get('lst')
    return sum(lst) + self.foo2()

class Foo(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def foo2(self):
        return self.a**self.b

    def foo(self):
        p = Pool(5)
        lst = [1, 2, 3]
        result = p.map(foo1, (dict(slf=self, lst=lst),))
        return result

if __name__ == '__main__':
    print(Foo(2, 4).foo())
I had to pass self to my function as I have to access attributes and functions of my class through that function. This is working for me. Corrections and suggestions are always welcome.
Here is boilerplate I wrote for using a multiprocessing Pool in Python 3; specifically, Python 3.7.7 was used to run the tests. I got my fastest runs using imap_unordered. Just plug in your scenario and try it out. You can use timeit or just time.time() to figure out which works best for you.
import multiprocessing
import time

NUMBER_OF_PROCESSES = multiprocessing.cpu_count()
MP_FUNCTION = 'starmap'  # 'imap_unordered' or 'starmap' or 'apply_async'

def process_chunk(a_chunk):
    print(f"processing mp chunk {a_chunk}")
    return a_chunk

map_jobs = [1, 2, 3, 4]
result_sum = 0

s = time.time()
if MP_FUNCTION == 'imap_unordered':
    pool = multiprocessing.Pool(processes=NUMBER_OF_PROCESSES)
    for i in pool.imap_unordered(process_chunk, map_jobs):
        result_sum += i
elif MP_FUNCTION == 'starmap':
    pool = multiprocessing.Pool(processes=NUMBER_OF_PROCESSES)
    try:
        map_jobs = [(i, ) for i in map_jobs]
        result_sum = pool.starmap(process_chunk, map_jobs)
        result_sum = sum(result_sum)
    finally:
        pool.close()
        pool.join()
elif MP_FUNCTION == 'apply_async':
    with multiprocessing.Pool(processes=NUMBER_OF_PROCESSES) as pool:
        result_sum = [pool.apply_async(process_chunk, [i, ]).get() for i in map_jobs]
        result_sum = sum(result_sum)
print(f"result_sum is {result_sum}, took {time.time() - s}s")
In the above scenario imap_unordered actually seems to perform the worst for me. Try out your case and benchmark it on the machine you plan to run it on. Also read up on Process Pools. Cheers!
I'm not sure if this approach has been taken, but a workaround I'm using is:
from multiprocessing import Pool

t = None

def run(n):
    return t.f(n)

class Test(object):
    def __init__(self, number):
        self.number = number

    def f(self, x):
        print x * self.number

    def pool(self):
        pool = Pool(2)
        pool.map(run, range(10))

if __name__ == '__main__':
    t = Test(9)
    t.pool()
    pool = Pool(2)
    pool.map(run, range(10))
Output should be:
0
9
18
27
36
45
54
63
72
81
0
9
18
27
36
45
54
63
72
81
from multiprocessing import Pool

class Calculate(object):
    # Your instance method to be executed
    def f(self, x, y):
        return x*y

if __name__ == '__main__':
    inp_list = [1,2,3]
    y = 2
    cal_obj = Calculate()
    pool = Pool(2)
    results = pool.map(lambda x: cal_obj.f(x, y), inp_list)
There is a possibility that you would want to apply this function to each different instance of the class. Here is the solution for that as well:
class Calculate(object):
    # Your instance method to be executed
    def __init__(self, x):
        self.x = x

    def f(self, y):
        return self.x*y

if __name__ == '__main__':
    inp_list = [Calculate(i) for i in range(3)]
    y = 2
    pool = Pool(2)
    results = pool.map(lambda x: x.f(y), inp_list)
From http://www.rueckstiess.net/research/snippets/show/ca1d7d90 and http://qingkaikong.blogspot.com/2016/12/python-parallel-method-in-class.html
We can make an external function and seed it with the class self object:
from joblib import Parallel, delayed

def unwrap_self(arg, **kwarg):
    return square_class.square_int(*arg, **kwarg)

class square_class:
    def square_int(self, i):
        return i * i

    def run(self, num):
        results = []
        results = Parallel(n_jobs=-1, backend="threading")\
            (delayed(unwrap_self)(i) for i in zip([self]*len(num), num))
        print(results)
OR without joblib:
from multiprocessing import Pool
import time

def unwrap_self_f(arg, **kwarg):
    return C.f(*arg, **kwarg)

class C:
    def f(self, name):
        print 'hello %s,'%name
        time.sleep(5)
        print 'nice to meet you.'

    def run(self):
        pool = Pool(processes=2)
        names = ('frank', 'justin', 'osi', 'thomas')
        pool.map(unwrap_self_f, zip([self]*len(names), names))

if __name__ == '__main__':
    c = C()
    c.run()
To implement multiprocessing in AWS Lambda, we have two ways.
Note: ThreadPool doesn't work in AWS Lambda.
1. Use the example solution provided by the AWS team: https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/
2. Use this package: https://pypi.org/project/lambda-multiprocessing/
I have implemented my Lambda function with both solutions and both work fine. I can't share my code here, but these two links will help you for sure.
I find the second way easier to implement.
There are also some libraries to make this easier, for example autothread (only for Python 3.6 and up):
import autothread

class calculate(object):
    def run(self):
        @autothread.multiprocessed()
        def f(x: int):
            return x*x

        return f([1,2,3])

cl = calculate()
print(cl.run())
You can also take a look at lox.
