python: how to apply a callback function for multiprocessing?

In my GUI application, I want to use multiprocessing to accelerate the calculation. I can already use multiprocessing and collect the calculated results. Now I want the subprocess to inform the main process that the calculation is finished, but I cannot find any solution.
My multiprocessing looks like:
import multiprocessing
from multiprocessing import Process

import numpy as np


class MyProcess(Process):
    def __init__(self, name, array):
        super(MyProcess, self).__init__()
        self.name = name
        self.array = array
        recv_end, send_end = multiprocessing.Pipe(False)
        self.recv = recv_end
        self.send = send_end

    def run(self):
        s = 0
        for a in self.array:
            s += a
        self.send.send(s)

    def getResult(self):
        return self.recv.recv()


if __name__ == '__main__':
    process_list = []
    for i in range(5):
        a = np.random.random(10)
        print(i, ' correct result: ', a.sum())
        p = MyProcess(str(i), a)
        p.start()
        process_list.append(p)
    for p in process_list:
        p.join()
    for p in process_list:
        print(p.name, ' subprocess result: ', p.getResult())
I want the sub-process to inform the main process that the calculation is finished so that I can show the result in my GUI.
Any suggestion is appreciated~~~

Assuming you would like to do something with a result (the sum of a numpy array, in your case) as soon as it has been generated, I would use a multiprocessing.pool.Pool with method imap_unordered, which returns results in the order they are generated. In that case you need to pass your worker function the index of the array in the list of arrays along with the array itself, and have it return that index together with the array's sum, since this is the only way for the main process to know which array a given sum belongs to:
from multiprocessing import Pool, cpu_count

import numpy as np


def compute_sum(tpl):
    # unpack tuple:
    i, array = tpl
    s = 0
    for a in array:
        s += a
    return i, s


if __name__ == '__main__':
    array_list = [np.random.random(10) for _ in range(5)]
    n = len(array_list)
    pool_size = min(cpu_count(), n)
    pool = Pool(pool_size)
    # get result as soon as it has been returned:
    for i, s in pool.imap_unordered(compute_sum, zip(range(n), array_list)):
        print(f'correct result {i}: {array_list[i].sum()}, actual result: {s}')
    pool.close()
    pool.join()
Prints:
correct result 0: 4.760033809335711, actual result: 4.76003380933571
correct result 1: 5.486818812843256, actual result: 5.486818812843257
correct result 2: 5.400374562564179, actual result: 5.400374562564179
correct result 3: 4.079376706247242, actual result: 4.079376706247242
correct result 4: 4.20860716467263, actual result: 4.20860716467263
In the above run, the results happened to be generated in the same order in which the tasks were submitted. To demonstrate that in general they can arrive in arbitrary order, depending on how long the worker function takes to compute each result, we introduce some randomness into the processing time:
import time
from multiprocessing import Pool, cpu_count

import numpy as np


def compute_sum(tpl):
    # unpack tuple:
    i, array = tpl
    # results will be generated in random order:
    time.sleep(np.random.sample())
    s = 0
    for a in array:
        s += a
    return i, s


if __name__ == '__main__':
    array_list = [np.random.random(10) for _ in range(5)]
    n = len(array_list)
    pool_size = min(cpu_count(), n)
    pool = Pool(pool_size)
    # get result as soon as it has been returned:
    for i, s in pool.imap_unordered(compute_sum, zip(range(n), array_list)):
        print(f'correct result {i}: {array_list[i].sum()}, actual result: {s}')
    pool.close()
    pool.join()
Prints:
correct result 4: 6.662288433360379, actual result: 6.66228843336038
correct result 0: 3.352901187256162, actual result: 3.3529011872561614
correct result 3: 5.836344458981557, actual result: 5.836344458981557
correct result 2: 2.9950208717729656, actual result: 2.9950208717729656
correct result 1: 5.144743159869513, actual result: 5.144743159869513
If you are satisfied with getting results back in task-submission order rather than task-completion order, then use method imap and there is no need to pass the array indices back and forth:
from multiprocessing import Pool, cpu_count

import numpy as np


def compute_sum(array):
    s = 0
    for a in array:
        s += a
    return s


if __name__ == '__main__':
    array_list = [np.random.random(10) for _ in range(5)]
    n = len(array_list)
    pool_size = min(cpu_count(), n)
    pool = Pool(pool_size)
    for i, s in enumerate(pool.imap(compute_sum, array_list)):
        print(f'correct result {i}: {array_list[i].sum()}, actual result: {s}')
    pool.close()
    pool.join()
Prints:
correct result 0: 4.841913985702773, actual result: 4.841913985702773
correct result 1: 4.836923014762733, actual result: 4.836923014762733
correct result 2: 4.91242274200897, actual result: 4.91242274200897
correct result 3: 4.701913574838348, actual result: 4.701913574838349
correct result 4: 5.813666896917504, actual result: 5.813666896917503
Update
You can also use method apply_async, specifying a callback function to be invoked when a result is returned from your worker function, compute_sum. apply_async returns a multiprocessing.pool.AsyncResult instance whose get method blocks until the task has completed and then returns the task's return value. But here, since we are using a callback function that is automatically called with the result when the task completes, there is no need to call multiprocessing.pool.AsyncResult.get or even to save the AsyncResult instances. Instead we rely on calling multiprocessing.pool.Pool.close() followed by multiprocessing.pool.Pool.join() to block until all submitted tasks have completed and their results have been returned:
from functools import partial
from multiprocessing import Pool, cpu_count

import numpy as np


def compute_sum(i, array):
    s = 0
    for a in array:
        s += a
    return i, s


def calculation_display(result, t):
    # Unpack returned tuple:
    i, s = t
    print(f'correct result {i}: {array_list[i].sum()}, actual result: {s}')
    result[i] = s


if __name__ == '__main__':
    global array_list
    array_list = [np.random.random(10) for _ in range(5)]
    n = len(array_list)
    result = [0] * n
    pool_size = min(cpu_count(), n)
    pool = Pool(pool_size)
    # Get result as soon as it has been returned.
    # Pass to our callback as the first argument the results list.
    # The return value will now be the second argument:
    my_callback = partial(calculation_display, result)
    for i, array in enumerate(array_list):
        pool.apply_async(compute_sum, args=(i, array), callback=my_callback)
    # Wait for all submitted tasks to complete:
    pool.close()
    pool.join()
    print('results:', result)
Prints:
correct result 0: 5.381579338696546, actual result: 5.381579338696546
correct result 1: 3.8780497856741274, actual result: 3.8780497856741274
correct result 2: 4.548733927791488, actual result: 4.548733927791488
correct result 3: 5.048921365623381, actual result: 5.048921365623381
correct result 4: 4.852415747983676, actual result: 4.852415747983676
results: [5.381579338696546, 3.8780497856741274, 4.548733927791488, 5.048921365623381, 4.852415747983676]
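Since the original question mentions a GUI, one caveat worth noting: the callback passed to apply_async runs on a helper thread of the main process, not on the GUI thread. A minimal sketch of one common pattern is shown below, assuming tkinter; the widget names, the polling interval and the sample data are illustrative and not part of the answer above. The callback only enqueues results, and the GUI's own event loop polls the queue:
import queue
import tkinter as tk
from multiprocessing import Pool


def compute_sum(i, array):
    # Runs in a worker process; must be defined at module level so it can be pickled.
    return i, sum(array)


if __name__ == '__main__':
    results_q = queue.Queue()  # thread-safe handoff between the callback thread and the GUI thread

    root = tk.Tk()
    label = tk.Label(root, text='waiting...')
    label.pack()

    def on_result(t):
        # Called on the pool's result-handler thread: only enqueue, never touch widgets here.
        results_q.put(t)

    def poll():
        try:
            i, s = results_q.get_nowait()
        except queue.Empty:
            pass
        else:
            label.config(text=f'task {i} finished: {s}')
        root.after(100, poll)  # keep polling from the GUI event loop

    pool = Pool(2)
    for i, array in enumerate([[1, 2, 3], [4, 5, 6]]):
        pool.apply_async(compute_sum, args=(i, array), callback=on_result)
    pool.close()
    root.after(100, poll)
    root.mainloop()
    pool.join()
The same polling idea carries over to other toolkits; the only requirement is that widget updates happen on the GUI thread.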

Related

Make pool.apply_async wait to finish current jobs before submitting new jobs

I have a nested for loop in which I have used multiprocessing.Pool to parallelize the inner loop. Here is some example code:
for i in range(N):
    for k in range(L):
        x[k] = Do_Something(x[k])
So as you can see, every iteration of i depends on the previous iteration, while the k loop is "embarrassingly" parallel with no dependence on which k finishes first. This naturally pointed me towards using apply_async.
The parallelized inner-loop code looks something like this:
pool = mp.Pool(Nworkers)
for i in range(N):
    for k in range(L):
        pool.apply_async(Do_Something, args=(k), callback=getresults)
pool.join()
pool.close()
Writing the code this way screws up the order of the i loop, since async does not wait for the k jobs to finish before moving on to the next i iteration. The question is: is there a way to pause the async submissions until all the jobs from the k loop finish before moving on to the next iteration of i? Using apply_async is beneficial here since the callback allows me to store the results in a given order. I saw some other answers, here and here, but they use alternative solutions like map, which seems like a valid alternative, but I'd like to stick with apply_async and wait for the jobs to finish...
I've also tried to stop and reinitialize the workers on every iteration of i, but the overhead from mp.Pool() at every i is not very efficient... Here's what I tried:
for i in range(N):
    pool = mp.Pool(Nworkers)
    for k in range(L):
        pool.apply_async(Do_Something, args=(k), callback=getresults)
    pool.join()
    pool.close()
There are a few issues with your current code:
The args parameter to the apply_async method takes an iterable, e.g. a list or tuple. You have specified args=(k), but (k) is not a tuple; it is just a parenthesized expression. You need args=(k,) (see the short snippet after this list).
Your calls to pool.join() and pool.close() are in the wrong order.
Your non-parallelized version specifies x[k] = Do_Something(x[k]), where you pass the function the argument x[k]. In your parallelized version you are just passing k. Which is correct? I will assume the former.
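To make the first point concrete, here is a quick check (illustrative values only):
k = 3
print((k))    # 3      -- just a parenthesized integer
print((k,))   # (3,)   -- a one-element tuple, which is what args expects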
As you have already determined, x[k] starts out with some value, and after you invoke x[k] = Do_Something(x[k]) it ends up with a new value, which is then passed to Do_Something on the next iteration of i. Therefore, you do need to submit tasks and process results in a loop for each value of i. But you do not want to use a callback, which delivers results in completion order instead of submission order. The call to apply_async returns a multiprocessing.pool.AsyncResult instance; if you call method get on this instance, it blocks until the submitted task ends and then returns the result:
pool = mp.Pool(Nworkers)
# Don't use a callback with apply_async, which
# gets results in completion order instead of
# submission order:
for i in range(N):
    # We must wait until all the return values have been assigned
    # to x[k] before submitting tasks for the next i value:
    async_results = [
        pool.apply_async(Do_Something, args=(x[k],))
        for k in range(L)
    ]
    # Set the new values of x[k] for the next iteration
    # of the outer loop:
    for k in range(L):
        x[k] = async_results[k].get()
pool.close()
pool.join()
But simpler would be to use the imap method:
pool = mp.Pool(Nworkers)
for i in range(N):
    # We must wait until all the return values have been assigned
    # to x[k] before submitting tasks for the next i value:
    for k, result in enumerate(pool.imap(Do_Something, x)):
        x[k] = result
pool.close()
pool.join()
Or even the map method, which will create a new list and is less memory efficient (not an issue unless you are dealing with a very large x):
pool = mp.Pool(Nworkers)
for i in range(N):
    # New x list:
    x = pool.map(Do_Something, x)
pool.close()
pool.join()
Here is a minimal, reproducible example:
# Successively square values. For N = 3, we are
# essentially raising a value to the 2 ** 3 = 8th power:
def Do_Something(value):
    """
    Square a value.
    """
    return value ** 2


# Required for Windows:
if __name__ == '__main__':
    import multiprocessing as mp

    x = [2, 3]
    L = len(x)
    N = 3
    Nworkers = min(L, mp.cpu_count())
    pool = mp.Pool(Nworkers)
    for i in range(N):
        # New x list:
        x = pool.map(Do_Something, x)
    print(x)
    pool.close()
    pool.join()
Prints:
[256, 6561]
But Better Yet Is ...
If you can modify Do_Something, then move the looping on i to this function. In this way you are submitting fewer but more CPU-intensive tasks, which is what you would like:
# Successively square values. For N = 3, we are
# essentially raising a value to the 2 ** 3 = 8th power:
N = 3


def Do_Something(value):
    """
    Square a value N times.
    """
    for _ in range(N):
        value = value ** 2
    return value


# Required for Windows:
if __name__ == '__main__':
    import multiprocessing as mp

    x = [2, 3]
    L = len(x)
    Nworkers = min(L, mp.cpu_count())
    pool = mp.Pool(Nworkers)
    # New x list:
    x = pool.map(Do_Something, x)
    print(x)
    pool.close()
    pool.join()
If you cannot modify Do_Something, then create a new function, Do_Something_N:
# Successively square values. For N = 3, we are
# essentially raising a value to the 2 ** 3 = 8th power:
N = 3


def Do_Something(value):
    """
    Square a value.
    """
    return value ** 2


def Do_Something_N(value):
    for _ in range(N):
        value = Do_Something(value)
    return value


# Required for Windows:
if __name__ == '__main__':
    import multiprocessing as mp

    x = [2, 3]
    L = len(x)
    Nworkers = min(len(x), mp.cpu_count())
    pool = mp.Pool(Nworkers)
    # New x list:
    x = pool.map(Do_Something_N, x)
    print(x)
    pool.close()
    pool.join()

join() output from multiprocessing when using tqdm for progress bar

I'm using a construct similar to this example to run my processing in parallel with a progress bar courtesy of tqdm...
from multiprocessing import Pool
import time
from tqdm import *


def _foo(my_number):
    square = my_number * my_number
    return square


if __name__ == '__main__':
    with Pool(processes=2) as p:
        max_ = 30
        with tqdm(total=max_) as pbar:
            for _ in p.imap_unordered(_foo, range(0, max_)):
                pbar.update()
        results = p.join()  ## My attempt to combine results
results is always NoneType, though, and I cannot work out how to get my results combined. I understand that with ...: will automatically close whatever it is working with on completion.
I've tried doing away with the outer with:
if __name__ == '__main__':
    max_ = 10
    p = Pool(processes=8)
    with tqdm(total=max_) as pbar:
        for _ in p.imap_unordered(_foo, range(0, max_)):
            pbar.update()
    p.close()
    results = p.join()
    print(f"Results : {results}")
Stumped as to how to join() my results?
Your call to p.join() just waits for all the pool processes to end and returns None. This call is actually unnecessary since you are using the pool as a context manager, that is, you have specified with Pool(processes=2) as p:. When that block terminates, an implicit call is made to p.terminate(), which immediately terminates the pool processes and any tasks that may be running or queued up to run (there are none in your case).
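As a small aside (my own illustration, not part of the original code), you can verify the first point directly: join() only waits for the workers and always returns None, while the results come from iterating the pool's result iterator:
from multiprocessing import Pool


def _square(n):
    return n * n


if __name__ == '__main__':
    with Pool(processes=2) as p:
        results = list(p.imap_unordered(_square, range(5)))  # results come from iterating
        p.close()
        print(p.join())   # prints None: join() only waits for the worker processes
    print(sorted(results))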
It is, in fact, iterating the iterator returned by the call to p.imap_unordered that yields each return value from your worker function, _foo. But since you are using method imap_unordered, the results may not be returned in submission order. In other words, you cannot assume that the return values will arrive in the sequence 0, 1, 4, 9, etc. There are many ways to handle this, such as having your worker function return the original argument along with the squared value:
from multiprocessing import Pool
import time
from tqdm import *


def _foo(my_number):
    square = my_number * my_number
    return my_number, square  # return the argument along with the result


if __name__ == '__main__':
    with Pool(processes=2) as p:
        max_ = 30
        results = [None] * max_  # preallocate the results list
        with tqdm(total=max_) as pbar:
            for x, result in p.imap_unordered(_foo, range(0, max_)):
                results[x] = result
                pbar.update()
    print(results)
The second way is to not use imap_unordered, but rather apply_async with a callback function. The disadvantage of this is that for large iterables you do not have the option of specifying a chunksize argument as you do with imap_unordered:
from multiprocessing import Pool
import time
from tqdm import *


def _foo(my_number):
    square = my_number * my_number
    return square


if __name__ == '__main__':
    def my_callback(_):  # ignore result
        pbar.update()  # update progress bar when a result is produced

    with Pool(processes=2) as p:
        max_ = 30
        with tqdm(total=max_) as pbar:
            async_results = [p.apply_async(_foo, (x,), callback=my_callback) for x in range(0, max_)]
            # wait for all tasks to complete:
            p.close()
            p.join()
            results = [async_result.get() for async_result in async_results]
    print(results)
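As a follow-up to the chunksize remark above, here is a small sketch (values are illustrative, not from the original answer) showing how imap_unordered can batch tasks for large iterables while still driving the progress bar:
from multiprocessing import Pool
from tqdm import tqdm


def _foo(my_number):
    return my_number, my_number * my_number


if __name__ == '__main__':
    max_ = 30
    results = [None] * max_
    with Pool(processes=2) as p:
        with tqdm(total=max_) as pbar:
            # chunksize batches submissions to each worker, reducing IPC overhead:
            for x, square in p.imap_unordered(_foo, range(max_), chunksize=5):
                results[x] = square
                pbar.update()
    print(results)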

Parallelizing nested loops

I'm trying to parallelize some nested loops using Pool; moreover, the function has to return an array, but the external array stays empty.
def calcul_T(m):
    temp = []
    for n in range(0, N):
        x = sym.Symbol('x')
        y = sym.sin(x)
        # .....some stuff.....
        temp.append(y)
    return temp

rt = []
if __name__ == '__main__':
    pool = Pool()
    rt.append(pool.map(calcul_T, range(0, M)))
    pool.close()
    pool.join()
I expect to get at least an array of arrays, so that I can then make it a 2-D array and use it further after the if __name__ block.
What am I doing wrong?
Use a context manager:
from multiprocessing import Pool


def calcul_T(m):
    temp = []
    for n in range(0, N):
        x = sym.Symbol('x')
        y = sym.sin(x)
        # .....some stuff.....
        temp.append(y)
    return temp

rt = []
if __name__ == '__main__':
    with Pool(N_PROCESSES) as p:
        rt = p.map(calcul_T, range(0, M))
EDIT
According to the comment, accessing rt like a normal 2-D array works just fine (run in a console; I changed the calcul_T function so this is runnable):
from multiprocessing import Pool

N = 10
M = 10


def calcul_T(m):
    temp = []
    for n in range(0, N):
        temp.append(n * m)
    return temp

rt = []
if __name__ == '__main__':
    with Pool(5) as p:
        rt = p.map(calcul_T, range(0, M))
    print(rt[8][8])
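If a true 2-D array is ultimately wanted, as the question suggests, the returned list of lists can be converted directly. This is a small sketch of my own, assuming numpy is available; with sympy expressions as elements the result is an object-dtype array:
import numpy as np

rt_2d = np.array(rt)   # shape (M, N)
print(rt_2d.shape)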

Implement merge_sort with multiprocessing solution

I tried to write a merge sort with a multiprocessing solution:
from heapq import merge
from multiprocessing import Process


def merge_sort1(m):
    if len(m) < 2:
        return m
    middle = len(m) // 2
    left = Process(target=merge_sort1, args=(m[:middle],))
    left.start()
    right = Process(target=merge_sort1, args=(m[middle:],))
    right.start()
    for p in (left, right):
        p.join()
    result = list(merge(left, right))
    return result
Test it with arr
In [47]: arr = list(range(9))
In [48]: random.shuffle(arr)
It reports an error:
In [49]: merge_sort1(arr)
TypeError: 'Process' object is not iterable
What's the problem with my code?
merge(left, right) tries to merge two processes, whereas you presumably want to merge the two lists that resulted from each process. Note that the return value of the function passed to Process is lost; it is a different process, not just a different thread, and you can't very easily shuffle data back to the parent, so Python doesn't do that by default. You need to be explicit and code such a channel yourself. Fortunately, there are multiprocessing data types to help you; for example, multiprocessing.Pipe:
from heapq import merge
import random
import multiprocessing


def merge_sort1(m, send_end=None):
    if len(m) < 2:
        result = m
    else:
        middle = len(m) // 2
        inputs = [m[:middle], m[middle:]]
        pipes = [multiprocessing.Pipe(False) for _ in inputs]
        processes = [multiprocessing.Process(target=merge_sort1, args=(input, send_end))
                     for input, (recv_end, send_end) in zip(inputs, pipes)]
        for process in processes: process.start()
        for process in processes: process.join()
        results = [recv_end.recv() for recv_end, send_end in pipes]
        result = list(merge(*results))
    if send_end:
        send_end.send(result)
    else:
        return result


if __name__ == '__main__':
    arr = list(range(9))
    random.shuffle(arr)
    print(merge_sort1(arr))
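For completeness, here is a variant of the same idea using multiprocessing.Queue as the return channel instead of a Pipe. The function name merge_sort_q and its structure are my own sketch, not part of the answer above; since both halves are already sorted, heapq.merge does not care in which order they are pulled from the queue:
from heapq import merge
import random
import multiprocessing


def merge_sort_q(m, out_queue=None):
    if len(m) < 2:
        result = m
    else:
        middle = len(m) // 2
        q = multiprocessing.Queue()
        halves = [m[:middle], m[middle:]]
        processes = [multiprocessing.Process(target=merge_sort_q, args=(half, q))
                     for half in halves]
        for process in processes:
            process.start()
        # Collect both sorted halves before joining the children:
        results = [q.get() for _ in processes]
        for process in processes:
            process.join()
        result = list(merge(*results))
    if out_queue is not None:
        out_queue.put(result)
    else:
        return result


if __name__ == '__main__':
    arr = list(range(9))
    random.shuffle(arr)
    print(merge_sort_q(arr))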

Trying to copy back elements in Array works with integers but not with strings in Multiprocessing Python module

I am trying to write a process which does some computation on an Array filled with strings using the multiprocessing module. However, I am not able to get back the results. This is just a minimalist code example:
from multiprocessing import Process, Value, Array
from ctypes import c_char_p


# Process
def f(n, a):
    for i in range(0, 10):
        a[i] = "test2".encode('latin-1')


if __name__ == '__main__':
    # Set up array
    arr = Array(c_char_p, range(10))
    # Fill it with values
    for i in range(0, 10):
        arr[i] = "test".encode('latin-1')
    x = []
    for i in range(0, 10):
        num = Value('d', float(i)*F)
        p = Process(target=f, args=(num, arr,))
        x.append(p)
        p.start()
    for p in x:
        p.join()
    # This works
    print(num.value)
    # This will not give out anything
    print(arr[0])
The last line won't print out anything, despite the array having been filled and altered.
The main thing that concerns me is that when I change the code to simply use integers, it works:
from multiprocessing import Process, Value, Array
from ctypes import c_char_p


def f(n, a):
    for i in range(0, 10):
        a[i] = 5


if __name__ == '__main__':
    arr = Array('i', range(10))
    for i in tqdm(range(0, 10)):
        arr[i] = 10
    x = []
    for i in range(0, 10):
        num = Value('d', float(i)*F)
        p = Process(target=f, args=(num, arr,))
        x.append(p)
        p.start()
    for p in x:
        p.join()
    print(num.value)
    print(arr[0])
My best guess is that this has something to do with the fact that the string array is actually filled with char arrays while an integer is just one value, but I do not know how to fix this.
This might answer your question. Basically, the string array arr holds an array of character pointers (c_char_p). When the first process invokes the function f, the character pointers are created in the context of that process but not in the other processes' contexts, so eventually, when the other processes try to access arr, the addresses are invalid.
In my case this seems to work fine:
from multiprocessing import Process, Value, Array
from ctypes import c_char_p

values = [b'test2438'] * 10


# Process
def f(n, a):
    for i, s in enumerate(values):
        a[i] = s


if __name__ == '__main__':
    # Set up array
    arr = Array(c_char_p, 10)
    # Fill it with values
    for i in range(0, 10):
        arr[i] = b'test'
    x = []
    for i in range(0, 10):
        num = Value('d', float(i))
        p = Process(target=f, args=(num, arr,))
        x.append(p)
        p.start()
    for p in x:
        p.join()
    # This now prints the values written by the worker processes:
    print(arr[:])
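If the goal is simply to share a list of strings that worker processes can modify, a usually simpler alternative (my suggestion, not from the answer above) is a Manager list, which sidesteps the raw-pointer problem entirely:
from multiprocessing import Process, Manager


def f(shared):
    # Plain Python strings can be assigned; the Manager proxies them between processes.
    for i in range(len(shared)):
        shared[i] = 'test2'


if __name__ == '__main__':
    with Manager() as manager:
        shared = manager.list(['test'] * 10)
        p = Process(target=f, args=(shared,))
        p.start()
        p.join()
        print(list(shared))  # ['test2', 'test2', ...]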
