How to maintain a global pool of processes that work recursively? - python

I want to implement a recursive parallel algorithm. The pool should be created only once; at each time step it should do a job, wait for all the jobs to finish, and then call the processes again with the previous outputs as inputs, repeating the same thing at the next time step, and so on.
My problem is that the version I have implemented creates and kills the pool at every time step, which is extremely slow, even slower than the sequential version. When I try to implement a version where the pool is created only once at the beginning, I get an assertion error when I call join().
This is my code
def log_result(result):
    tempx, tempb, u = result
    X[:,u,np.newaxis], b[:,u,np.newaxis] = tempx, tempb

workers = mp.Pool(processes=4)

for t in range(p, T):
    count = 0  #==========This is only master's job=============
    for l in range(p):
        for k in range(4):
            gn[count] = train[t-l-1, k]
            count += 1
    G = G*v + gn  # gn.T#==================================
    if __name__ == '__main__':
        for i in range(4):
            workers.apply_async(OULtraining, args=(train[t,i], X[:,i,np.newaxis], b[:,i,np.newaxis], i, gn), callback=log_result)
        workers.join()
X and b are the matrices that I want to update directly in the master's memory.
What is wrong here that gives me the assertion error?
Can I implement what I want with the pool or not?

You cannot join a pool that has not been closed first, as join() waits for the worker processes to terminate, not for the jobs to complete (https://docs.python.org/3.6/library/multiprocessing.html section 17.2.2.9).
But since closing the pool is not what you want, you cannot use this. So join() is out, and you need to implement a "wait until all jobs have completed" yourself.
One way of doing this without busy loops would be using a queue. You could also work with bounded semaphores, but they do not work on all operating systems.
counter = 0
lock_queue = multiprocessing.Queue()
counter_lock = multiprocessing.Lock()

def log_result(result):
    global counter  # the callback runs in the main process, so a global counter works here
    tempx, tempb, u = result
    X[:,u,np.newaxis], b[:,u,np.newaxis] = tempx, tempb
    with counter_lock:
        counter += 1
        if counter == 4:
            counter = 0
            lock_queue.put(42)

workers = mp.Pool(processes=4)

for t in range(p, T):
    count = 0  #==========This is only master's job=============
    for l in range(p):
        for k in range(4):
            gn[count] = train[t-l-1, k]
            count += 1
    G = G*v + gn  # gn.T#==================================
    if __name__ == '__main__':
        counter = 0
        for i in range(4):
            workers.apply_async(OULtraining, args=(train[t,i], X[:,i,np.newaxis], b[:,i,np.newaxis], i, gn), callback=log_result)
        lock_queue.get(block=True)
This resets a global counter before submitting the jobs. As soon as a job is completed, your callback increments the global counter. When the counter hits 4 (your number of jobs), the callback knows it has processed the last result, and a dummy message is put on a queue. Your main program waits at Queue.get() for something to appear there.
This allows your main program to block until all jobs have completed, without closing down the pool.
If you replace multiprocessing.Pool with ProcessPoolExecutor from concurrent.futures, you can skip this part and use
concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)
to block until all submitted tasks have finished. From a functional standpoint there is no difference between these; the concurrent.futures approach is a couple of lines shorter, but the result is exactly the same.
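For illustration, here is a minimal sketch of that variant (assuming the same OULtraining, train, X, b, gn, p, T and np as in the question); collecting results with f.result() in the parent replaces the callback:
from concurrent.futures import ProcessPoolExecutor, wait, ALL_COMPLETED

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=4) as executor:
        for t in range(p, T):
            # ... build gn and G for this time step, as in the original loop ...
            futures = [executor.submit(OULtraining, train[t,i], X[:,i,np.newaxis],
                                       b[:,i,np.newaxis], i, gn)
                       for i in range(4)]
            wait(futures, return_when=ALL_COMPLETED)  # block until this time step is done
            for f in futures:
                tempx, tempb, u = f.result()          # update the master's X and b
                X[:,u,np.newaxis], b[:,u,np.newaxis] = tempx, tempb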

Related

Why is my parallel code slower than my serial code?

Recently started learning parallel programming on my own and I have next to no idea what I'm doing. I tried applying what I have learnt, but I think I'm doing something wrong because my parallel code takes longer to execute than my serial code. My PC is running an i7-9700. This is the original serial code in question:
def getMatrix(name):
    matrixCreated = []
    i = 0
    while True:
        i += 1
        row = input('\nEnter elements in row %s of Matrix %s (separated by commas)\nOr -1 to exit: ' % (i, name))
        if row == '-1':
            break
        else:
            strList = row.split(',')
            matrixCreated.append(list(map(int, strList)))
    return matrixCreated

def getColAsList(matrixToManipulate, col):
    myList = []
    numOfRows = len(matrixToManipulate)
    for i in range(numOfRows):
        myList.append(matrixToManipulate[i][col])
    return myList

def getCell(matrixA, matrixB, r, c):
    matrixBCol = getColAsList(matrixB, c)
    lenOfList = len(matrixBCol)
    productList = [matrixA[r][i]*matrixBCol[i] for i in range(lenOfList)]
    return sum(productList)

matrixA = getMatrix('A')
matrixB = getMatrix('B')

rowA = len(matrixA)
colA = len(matrixA[0])
rowB = len(matrixB)
colB = len(matrixB[0])

result = [[0 for p in range(colB)] for q in range(rowA)]

if (colA != rowB):
    print('The two matrices cannot be multiplied')
else:
    print('\nThe result is')
    for i in range(rowA):
        for j in range(colB):
            result[i][j] = getCell(matrixA, matrixB, i, j)
        print(result[i])
EDIT: This is the parallel code, with the time library added. I initially didn't include it as I thought it was wrong, so I just wanted to see if anyone had ideas on how to parallelize it instead.
import multiprocessing as mp
pool = mp.Pool(mp.cpu_count())

def getMatrix(name):
    matrixCreated = []
    i = 0
    while True:
        i += 1
        row = input('\nEnter elements in row %s of Matrix %s (separated by commas)\nOr -1 to exit: ' % (i, name))
        if row == '-1':
            break
        else:
            strList = row.split(',')
            matrixCreated.append(list(map(int, strList)))
    return matrixCreated

def getColAsList(matrixToManipulate, col):
    myList = []
    numOfRows = len(matrixToManipulate)
    for i in range(numOfRows):
        myList.append(matrixToManipulate[i][col])
    return myList

def getCell(matrixA, matrixB, r, c):
    matrixBCol = getColAsList(matrixB, c)
    lenOfList = len(matrixBCol)
    productList = [matrixA[r][i]*matrixBCol[i] for i in range(lenOfList)]
    return sum(productList)

matrixA = getMatrix('A')
matrixB = getMatrix('B')

rowA = len(matrixA)
colA = len(matrixA[0])
rowB = len(matrixB)
colB = len(matrixB[0])

import time
start_time = time.time()

result = [[0 for p in range(colB)] for q in range(rowA)]

if (colA != rowB):
    print('The two matrices cannot be multiplied')
else:
    print('\nThe result is')
    for i in range(rowA):
        for j in range(colB):
            result[i][j] = getCell(matrixA, matrixB, i, j)
        print(result[i])

print(" %s seconds " % (time.time() - start_time))

results = [pool.apply(getMatrix, getColAsList, getCell)]
pool.close()
So I would agree that you are doing something wrong. I would say that your code is not parallelizable.
For code to be parallelizable it has to be divisible into smaller pieces, and it has to be either:
1. Independent, meaning that when it runs it doesn't rely on other processes to do its job.
For example, if I have a list with 1,000,000 objects that need to be processed, and I have 4 workers to process them with, then I give each worker 1/4 of the objects to process, and when they finish, all objects have been processed. But worker 3 doesn't care whether worker 1, 2 or 4 completed before or after it did. Nor does worker 3 care about what worker 1, 2 or 4 returned or did. It actually shouldn't even know that there are any other workers out there.
2. Managed, meaning there are dependencies between workers, but that's OK because you have a main thread that coordinates them. Still, workers shouldn't know or care about each other. Think of them as mindless muscle: they only do what you tell them to do, not think for themselves.
For example, I have a list with 1,000,000 objects that need to be processed. First all objects need to go through func1, which returns something. Once ALL objects are done with func1, those results should then go into func2. So I create 4 workers, give each worker 1/4 of the objects, and have them process them with func1 and return the results. I wait for all workers to finish processing the objects. Then I give each worker 1/4 of the results returned by func1 and have them process them with func2. And I can keep doing this as many times as I want. All I have to do is have the main thread coordinate the workers so they don't start when they aren't supposed to, and tell them what and when to process (see the sketch below).
Take this with a grain of salt as this is a simplified version of parallel processing.
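To make the "managed" pattern concrete, here is a minimal sketch using multiprocessing.Pool; func1 and func2 are hypothetical placeholders for the real work:
from multiprocessing import Pool

def func1(x):
    return x * 2       # stage 1 (placeholder work)

def func2(x):
    return x + 1       # stage 2 (placeholder work)

if __name__ == '__main__':
    objects = list(range(1000000))
    with Pool(processes=4) as pool:
        stage1 = pool.map(func1, objects)  # main process blocks until ALL of stage 1 is done
        stage2 = pool.map(func2, stage1)   # only then does stage 2 start
    print(stage2[:5])
The main process is the only coordinator here: the workers never know about each other, they just receive chunks and return results.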
Tips for parallelism and concurrency
You shouldn't get user input in parallel. Only the main thread should handle that.
If your workload is light then you shouldn't use parallel processing.
If your task can't be divided up into smaller pieces then it's not parallelizable. But it can still be run on a background thread as a way of running something concurrently.
Concurrency Example:
Suppose your task is long-running and not parallelizable, let's say it takes 10 minutes to complete, and it requires a user to give input. Then when the user gives input, start the task on a worker. If the user gives input again 1 minute later, take that input and start the 2nd task on worker 2. Input at 5 minutes starts task 3 on worker 3. At the 10-minute mark task 1 is complete. Because everything is running concurrently, by the 15-minute mark all tasks are complete. That's 2x faster than running the tasks in serial, which would take 30 minutes. However, this is concurrency, not parallelism.
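A minimal sketch of that scenario, using a hypothetical long_task standing in for the 10-minute job:
import time
from concurrent.futures import ThreadPoolExecutor

def long_task(user_input):
    time.sleep(600)            # stand-in for ~10 minutes of non-divisible work
    return user_input

if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=3) as executor:
        futures = []
        while True:
            user_input = input('Start a task (or -1 to quit): ')
            if user_input == '-1':
                break
            futures.append(executor.submit(long_task, user_input))  # runs in the background
        # leaving the with-block waits for any tasks that are still running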

Multithreading: threads of functionB do not start until threads of functionA finish, but threads of functionA do not wait for functionB

I am trying to set up multithreading of two functions. The execution of function B is dependent on function A finishing, so I want to wait until all the threads of function A are finished before running a thread of function B. But function A does not depend on function B, so while a thread of function B is running, I want to start running threads of function A. In this case, function B only needs 1 thread, but function A can have multiple.
Here is a code example of my best attempt. The code below builds secondList, of length 20, over 20 iterations of functionB. Within each of those 20 iterations, functionA goes through 200 iterations to make its own temporary list firstList, which functionB uses to create a single item in secondList.
Those 200 iterations of functionA can use multiple workers; functionB can only have one at a time. Within an iteration of k, functionB needs to wait until all 200 iterations of functionA are completed. But at iteration k+1, functionA should continue with its next iterations and not wait for the functionB iteration at iteration k.
import threading

maxthreads = 4
sema1 = threading.Semaphore(value=maxthreads)
maxthreads = 1
sema2 = threading.Semaphore(value=maxthreads)

def functionA(i):
    sema1.acquire()
    firstList.append(i*2)
    sema1.release()

def functionB(j):
    sema2.acquire()
    secondList.append(j + sum(firstList))
    sema2.release()

secondList = []
for k in range(20):
    firstList = []
    for n in range(0, 200):
        thread = threading.Thread(target=functionA, args=(n,))
        thread.start()
    thread = threading.Thread(target=functionB, args=(m,))
    thread.start()
How do I set up the threading so that, for each iteration of k, functionB does not run until all the n iteration threads from functionA are completed,
but also so that the functionA threads proceed with their iterations of n for the next iteration k+1, even if the single thread occupying functionB at k has not completed? It is not necessary for functionB at iteration k to finish in order for the functionA threads at iteration k+1 to start their tasks.
Also note, functionB can only have one thread running at a time, whereas functionA can have multiple threads.
Edit:
Dan D. posted a solution below which prevents both functions A and B from running until the other is finished, but I only need to prevent B from running until A is finished; A can run without waiting for B. I came up with this solution based on Dan D.'s:
a_threads = []
secondList = []
for k in range(20):
    firstList = []
    for n in range(0, 200):
        thread = threading.Thread(target=functionA, args=(n,))
        thread.start()
        a_threads.append(thread)
    for thread in a_threads:
        thread.join()
    a_threads = []
    thread = threading.Thread(target=functionB, args=(m,))
    thread.start()
    a_threads.append(thread)
So I do not join the functionB thread right after starting it; instead I append it to a_threads. After a_threads are joined, I create a new empty a_threads list and append the new functionB thread to it.
Would this be a working solution to my requirements?
To do that you join all the A threads before starting the B thread. And then join the B thread so that the next A threads aren't started until it ends.
secondList = []
for k in range(20):
    firstList = []
    a_threads = []
    for n in range(0, 200):
        thread = threading.Thread(target=functionA, args=(n,))
        thread.start()
        a_threads.append(thread)
    for thread in a_threads:
        thread.join()
    thread = threading.Thread(target=functionB, args=(m,))
    thread.start()
    thread.join()
To allow the A threads to start before the B thread has finished, shift the join around:
secondList = []
b_thread = None
for k in range(20):
    firstList = []
    a_threads = []
    for n in range(0, 200):
        thread = threading.Thread(target=functionA, args=(n,))
        thread.start()
        a_threads.append(thread)
    for thread in a_threads:
        thread.join()
    if b_thread is not None:
        b_thread.join()
    thread = threading.Thread(target=functionB, args=(m,))
    thread.start()
    b_thread = thread
if b_thread is not None:
    b_thread.join()

Why does multiprocessing yield different timings depending on the order of execution?

I am running the following benchmark script on a Windows machine. I noticed that the order in which multiprocessed() gets executed affects its performance. If I execute multiprocessed() first, it executes faster than the simple() and multithreaded() methods; if I execute it at the end, its processing time is almost double that of multithreaded() and simple().
import random
from threading import Thread
from multiprocessing import Process
import time

size = 10000000  # Number of random numbers to add to list
threads = 8      # Number of threads to create

my_list = []
for i in range(0, threads):
    my_list.append([])

def func(count, mylist):
    for i in range(count):
        mylist.append(random.random())

processes = []
for i in range(0, threads):
    p = Process(target=func, args=(size, my_list[i]))
    processes.append(p)

def multithreaded():
    jobs = []
    for i in range(0, threads):
        thread = Thread(target=func, args=(size, my_list[i]))
        jobs.append(thread)
    # Start the threads
    for j in jobs:
        j.start()
    # Ensure all of the threads have finished
    for j in jobs:
        j.join()

def simple():
    for i in range(0, threads):
        func(size, my_list[i])

def multiprocessed():
    global processes
    # Start the processes
    for p in processes:
        p.start()
    # Ensure all processes have finished execution
    for p in processes:
        p.join()

if __name__ == "__main__":
    start = time.time()
    multiprocessed()
    print("elapsed time:{}".format(time.time()-start))
    start = time.time()
    simple()
    print("elapsed time:{}".format(time.time()-start))
    start = time.time()
    multithreaded()
    print("elapsed time:{}".format(time.time()-start))
Results #1: multiprocessed (2.85s) -> simple (7.39s) -> multithread (7.84s)
Results #2: multithread (7.84s) -> simple (7.53s) -> multiprocessed (13.96s)
Why is that? How do I properly use multiprocessing on Windows in order to improve speed by utilizing the CPU cores?
Your timing code doesn't isolate each test from the effects of the others. If you execute multiprocessed first, the sublists of my_list are empty. If you execute it last, the sublists are full of elements added by the other runs, dramatically increasing the communication overhead involved in sending the data to the worker processes.
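One way to isolate the tests, sketched here with the same func and settings as the question (the fresh_lists helper is mine), is to rebuild the per-worker lists before every run and create the Process objects only when they are needed:
import random
import time
from threading import Thread
from multiprocessing import Process

size = 10000000
workers = 8

def func(count, mylist):
    for _ in range(count):
        mylist.append(random.random())

def fresh_lists():
    return [[] for _ in range(workers)]

def multiprocessed():
    lists = fresh_lists()      # empty argument lists are cheap to send to the children
    procs = [Process(target=func, args=(size, lists[i])) for i in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

def multithreaded():
    lists = fresh_lists()
    jobs = [Thread(target=func, args=(size, lists[i])) for i in range(workers)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()

def simple():
    lists = fresh_lists()
    for i in range(workers):
        func(size, lists[i])

if __name__ == "__main__":
    for name, fn in [("multiprocessed", multiprocessed), ("simple", simple), ("multithreaded", multithreaded)]:
        start = time.time()
        fn()
        print("{} elapsed time: {}".format(name, time.time() - start))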

Why do python Processes behave differently with barrier.wait()?

TLDR
Selecting the process that gets 0 back from barrier.wait() biases the order of the processes on the next iteration. Why?
Full Story
The below problem is a simple version of my actual problem.
Let's say I have the following multiprocessing problem:
I have a string (named output), n processes, n characters (not spaces), and N iterations.
Example: output, n, characters, N = '', 3, ['a', 'b', 'c'], 4
For each iteration, every process should append its character to the string (in any order). After each iteration, a space character should be appended to the string.
For the example, the output could be:
output = 'bca abc bac cab'
To get this functionality I use the multiprocessing library.
from multiprocessing import Process, Lock, Value, Manager, Barrier
import ctypes

def print_characters(lock, barrier, character, N, output):
    for i in range(N):
        lock.acquire()
        output.value += character
        lock.release()
        _id = barrier.wait()  # get an id for each thread (although not promised to be assigned randomly)
        if _id == 0:  # select 1 and get this thread to append the ' '
            lock.acquire()
            output.value += ' '
            lock.release()
        barrier.wait()

if __name__ == '__main__':
    manager = Manager()
    output = manager.Value(ctypes.c_char_p, "")
    num_processes = 3
    characters = 'abc'
    N = 4
    lock, barrier = Lock(), Barrier(num_processes)
    processes = []
    for i in range(num_processes):
        p = Process(target=print_characters, args=(lock, barrier, characters[i], N, output))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
    print(output.value)
Cool....
According to the documentation for Barrier.wait(),
When all the [processes] to the barrier have called this function, they are all released simultaneously.
So, if all n processes are released simultaneously, I wondered if this would cause a kind of "uniform random race condition" on the processes such that each process had an equal chance of getting to the lock.acquire() on the next iteration.
I don't really care about the order but it might be nice to know if I have the processes doing something different in the future.
To see if this was the case, I counted the number of times each character appeared in each position:
output = output.value.split(' ')[:-1]
c = [[0]*num_processes for i in range(num_processes)]
for line in output:
    for i, character in enumerate(line):
        ind = characters.find(character)
        c[ind][i] += 1
print(c)
For N = 1000, this is one output I got:
[[1000 0 0]
[ 0 948 52]
[ 0 52 948]]
:| ok?
So out of 1000 iterations, the process with the first character was always first. That didn't seem very random.
I then made one change in the print_characters function:
if _id == 1:
And I got:
[[254 254 492]
[489 489 22]
[257 257 486]]
Not uniform random but also not as biased.
Changing the number of processes (assuming the cpu can support it) and thus the number of characters shows this same effect:
Selecting the process that gets 0 back from barrier.wait() biases the order of the processes on the next iteration.
Why?

Can Python threads work on the same process?

I am trying to come up with a way to have threads work on the same goal without interfering. In this case I am using 4 threads to add up every number between 0 and 90,000. This code runs but it ends almost immediately (Runtime: 0.00399994850159 sec) and only outputs 0. Originally I wanted to do it with a global variable, but I was worried about the threads interfering with each other (i.e. the small chance that two threads double count or skip a number due to strange timing of the reads/writes). So instead I distributed the workload beforehand. If there is a better way to do this please share. This is my simple way of trying to get some experience with multithreading. Thanks
import threading
import time

start_time = time.time()

tot1 = 0
tot2 = 0
tot3 = 0
tot4 = 0

def Func(x, y, tot):
    tot = 0
    i = y-x
    while z in range(0, i):
        tot = tot + i + z

# class Tester(threading.Thread):
#     def run(self):
#         print(n)

w = threading.Thread(target=Func, args=(0, 22499, tot1))
x = threading.Thread(target=Func, args=(22500, 44999, tot2))
y = threading.Thread(target=Func, args=(45000, 67499, tot3))
z = threading.Thread(target=Func, args=(67500, 89999, tot4))

w.start()
x.start()
y.start()
z.start()

w.join()
x.join()
y.join()
z.join()

# while (w.isAlive() == False | x.isAlive() == False | y.isAlive() == False | z.isAlive() == False): {}

total = tot1 + tot2 + tot3 + tot4
print total
print("--- %s seconds ---" % (time.time() - start_time))
You have a bug that makes this program end almost immediately. Look at while z in range(0,i): in Func. z isn't defined in the function, and it's only by luck (bad luck really) that you happen to have a global variable z = threading.Thread(target=Func, args=(67500,89999,tot4)) that masks the problem. You are testing whether that thread object is in a list of integers... and it's not!
The next problem is with the global variables. First, you are absolutely right that using a single global variable is not thread safe. The threads would mess with each other's calculations. But you misunderstand how globals work. When you do threading.Thread(target=Func, args=(67500,89999,tot4)), Python passes the object currently referenced by tot4 to the function, but the function has no idea which global it came from. You only update the local variable tot and discard it when the function completes.
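A small illustration of that point, mirroring the question's names: rebinding the parameter inside the function never touches the global that was passed in.
tot1 = 0

def Func(tot):
    tot = 123      # rebinds the local name only
    return tot

Func(tot1)
print(tot1)        # still prints 0 - the global was never updated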
A solution is to use a global container to hold the calculations, as shown in the example below. Unfortunately, this is actually slower than just doing all the work in one thread: the Python global interpreter lock (GIL) only lets one thread run at a time, so threading only slows down CPU-intensive tasks implemented in pure Python.
You could look at the multiprocessing module to split this into multiple processes. That works well if the cost of running the calculation is large compared to the cost of starting the process and passing it data.
Here is a working copy of your example:
import threading
import time

start_time = time.time()

tot = [0] * 4

def Func(x, y, tot_index):
    my_total = 0
    i = y-x
    for z in range(0, i):
        my_total = my_total + i + z
    tot[tot_index] = my_total

# class Tester(threading.Thread):
#     def run(self):
#         print(n)

w = threading.Thread(target=Func, args=(0, 22499, 0))
x = threading.Thread(target=Func, args=(22500, 44999, 1))
y = threading.Thread(target=Func, args=(45000, 67499, 2))
z = threading.Thread(target=Func, args=(67500, 89999, 3))

w.start()
x.start()
y.start()
z.start()

w.join()
x.join()
y.join()
z.join()

# while (w.isAlive() == False | x.isAlive() == False | y.isAlive() == False | z.isAlive() == False): {}

total = sum(tot)
print(total)
print("--- %s seconds ---" % (time.time() - start_time))
You can pass in a mutable object to collect your results, either one with an identifier, e.g. a dict, or simply a list to append() the results to, e.g.:
import threading

def Func(start, stop, results):
    results.append(sum(range(start, stop+1)))

rngs = [(0, 22499), (22500, 44999), (45000, 67499), (67500, 89999)]
results = []
jobs = [threading.Thread(target=Func, args=(start, stop, results)) for start, stop in rngs]

for j in jobs:
    j.start()
for j in jobs:
    j.join()

print(sum(results))
# 4049955000
# 100 loops, best of 3: 2.35 ms per loop
As others have noted, you could look at multiprocessing in order to split the work across multiple processes that can run in parallel. This is especially beneficial for CPU-intensive tasks, assuming there isn't a huge amount of data to pass between the processes.
Here's a simple implementation of the same functionality using multiprocessing:
from multiprocessing import Pool

POOL_SIZE = 4
NUMBERS = 90000

def func(_range):
    tot = 0
    for z in range(*_range):
        tot += z
    return tot

with Pool(POOL_SIZE) as pool:
    chunk_size = int(NUMBERS / POOL_SIZE)
    chunks = ((i, i + chunk_size) for i in range(0, NUMBERS, chunk_size))
    print(sum(pool.imap(func, chunks)))
In the above, chunks is a generator that produces the same ranges that were hardcoded in the original version. It's given to imap, which works like the standard map except that it executes the function in the processes within the pool.
A lesser-known fact about multiprocessing is that you can easily convert the code to use threads instead of processes by using the undocumented multiprocessing.pool.ThreadPool. In order to convert the above example to use threads, just change the import to:
from multiprocessing.pool import ThreadPool as Pool
