Multiprocessing in Python won't release memory - python

I am running a multiprocessing code. The framework of the code is something like below:
import multiprocessing
import numpy as np

def func_a(x):
    # main function here
    return result

def func_b(y):
    cores = multiprocessing.cpu_count() - 1
    pool = multiprocessing.Pool(processes=cores)
    results = pool.map(func_a, np.arange(1000))
    return results

if __name__ == '__main__':
    final_resu = []
    for i in range(0, 200):
        final_resu.append(func_b(i))
This code has two problems. First, memory usage keeps climbing as the loop runs. Second, in Task Manager (Windows 10), the number of Python processes increases step-wise, i.e. 14, to 25, to 36, to 47... with every finished iteration of the main loop.
I believe something is wrong with the multiprocessing, but I'm not sure how to deal with it. It looks like the pool created in func_b is not cleaned up when the main loop finishes an iteration?

As the examples in the docs show, when you're done with a Pool you should shut it down explicitly, via pool.close() followed by pool.join(). That said, it would be better still if, in addition, you created your Pool only once - e.g., pass a Pool as an argument to func_b() - and created it, and closed it down, only once, in the __name__ == '__main__' block.
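A minimal sketch of that restructuring, keeping the question's placeholder body for func_a (the return value there is just illustrative):

import multiprocessing
import numpy as np

def func_a(x):
    # main function here (placeholder return for this sketch)
    return x

def func_b(pool, y):
    # reuse the pool that was created once in the main block
    return pool.map(func_a, np.arange(1000))

if __name__ == '__main__':
    cores = multiprocessing.cpu_count() - 1
    pool = multiprocessing.Pool(processes=cores)
    final_resu = []
    for i in range(0, 200):
        final_resu.append(func_b(pool, i))
    # shut the pool down exactly once, after all the work is done
    pool.close()
    pool.join()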

Related

Why does comparing two images take longer when running the procedure in parallel using python's Pool module?

I'm developing a program that involves computing similarity scores for around 480 pairs of images (20 directories with around 24 images in each). I'm utilizing the sentence_transformers Python module for image comparison, and it takes around 0.1 - 0.2 seconds on my Windows 11 machine to compare two images when running in serial, but for some reason, that time gets increased to between 1.5 and 3.0 seconds when running in parallel using a process Pool. So, either a), there's something going on behind the scenes that I'm not yet aware of, or b) I just did it wrong.
Here's a rough structure of the image comparison function:
from time import time
from sentence_transformers import util

def compare_images(image_one, image_two, clip_model):
    start = time()
    images = [image_one, image_two]
    # clip_model is set to SentenceTransformer('clip-ViT-B-32') elsewhere in the code
    encoded_images = clip_model.encode(images, batch_size=2, convert_to_tensor=True, show_progress_bar=False)
    processed_images = util.paraphrase_mining_embeddings(encoded_images)
    stop = time()
    print("Comparison time: %f" % (stop - start))
    score, image_id1, image_id2 = processed_images[0]
    return score
Here's a rough structure of the serial version of the code to compare every image:
import os
from PIL import Image

def compare_all_images(candidate_image, directory, clip_model):
    for dir_entry in os.scandir(directory):
        dir_image_path = dir_entry.path
        dir_image = Image.open(dir_image_path)
        similarity_score = compare_images(candidate_image, dir_image, clip_model)
        # ... code to determine whether this is the maximum score the program has seen...
Here is a rough structure of the parallel version:
from multiprocessing import Pool

def compare_all_images(candidate_image, directory, clip_model):
    pool_results = dict()
    pool = Pool()
    for dir_entry in os.scandir(directory):
        dir_image_path = dir_entry.path
        dir_image = Image.open(dir_image_path)
        pool_results[dir_image_path] = pool.apply_async(compare_images, args=(candidate_image, dir_image, clip_model))
    # Added everything to the pool, close it and wait for everything to finish
    pool.close()
    pool.join()
    # ... remaining code to determine which image has the highest similarity rating
I'm not sure where I might be erring.
The interesting thing here is that I also developed a smaller program to verify whether I was doing things correctly:
from multiprocessing import Pool
from time import sleep, time

def func():
    sleep(6)

def main():
    pool = Pool()
    for i in range(20):
        pool.apply_async(func)
    pool.close()
    start = time()
    pool.join()
    stop = time()
    print("Time: %f" % (stop - start))  # This gave an average of 12 seconds
                                        # across multiple runs on my Windows 11
                                        # machine, on which multiprocessing.cpu_count() == 12

if __name__ == "__main__":
    main()
Is this a problem with trying to make things parallel with sentence transformers, or does the problem lie elsewhere?
UPDATE: Now I'm especially confused. I'm now only passing str objects to the comparison function and have temporarily slapped a return 0 as the very first line in the function to see if I can further isolate the issue. Oddly, even though the parallel function is doing absolutely nothing now, several seconds (usually around 5) still seem to pass between the time that the pool is closed and the time that pool.join() finishes. Any thoughts?
UPDATE 2: I've done some more playing around, and have found out that an empty pool still has some overhead. This is the code I'm testing out currently:
# ...
pool = Pool()
pool.close()
start = time()
DebuggingUtilities.debug("empty pool closed, doing a join on the empty pool to see if directory traversal is messing things up")
pool.join()
stop = time()
DebuggingUtilities.debug("Empty pool join time: %f" % (stop - start) )
This gives me an "Empty pool join time" of about 5 seconds. Moving this snippet to the very first part of my main function still yields the same. Perhaps Pool works differently on Windows? In WSL (Ubuntu 20.04), the same code runs in about 0.02 seconds. So, what would cause even an empty Pool to hang for such a long time on Windows?
UPDATE 3: I've made another discovery. The empty pool problem goes away if the only imports I have are from multiprocessing import Pool and from time import time. However, the program uses a boatload of import statements across several source files, which causes the program to hang a bit when it first starts. I suspect that this is propagating down into the Pool for some reason. Unfortunately, I need all of the import statements that are in the source files, so I'm not sure how to get around this (or why the imports would affect an empty Pool).
UPDATE 4: So, apparently it's the from sentence_transformers import SentenceTransformer line that's causing issues (without that import, the pool.join() call happens relatively quickly). I think the easiest solution now is to simply move the compare_images function into a separate file. I'll update this question as I implement this.
UPDATE 5: I've done a little more playing around, and it seems like on Windows, the import statements get executed multiple times whenever a Pool gets created, which I think is just weird. Here's the code I used to verify this:
from multiprocessing import Pool
from datetime import datetime
from time import time
from utils import test

print("outside function lol")

def get_time():
    now = datetime.now()
    return "%02d/%02d/%04d - %02d:%02d:%02d" % (now.month, now.day, now.year, now.hour, now.minute, now.second)

def main():
    pool = Pool()
    print("Starting pool")
    """
    for i in range(4):
        print("applying %d to pool %s" % (i, get_time()))
        pool.apply_async(test, args=(i,))
    """
    pool.close()
    print("Pool closed, waiting for all processes to finish")
    start = time()
    pool.join()
    stop = time()
    print("pool done: %f" % (stop - start))

if __name__ == "__main__":
    main()
Running through Windows command prompt:
outside function lol
Starting pool
Pool closed, waiting for all processes to finish
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
outside function lol
pool done: 4.794051
Running through WSL:
outside function lol
Starting pool
Pool closed, waiting for all processes to finish
pool done: 0.048856
UPDATE 6: I think I might have a workaround, which is to create the Pool in a file that doesn't directly or indirectly import anything from sentence_transformers. I then pass the model and anything else I need from sentence_transformers as parameters to a function that handles the Pool and kicks off all of the parallel processes. Since the sentence_transformers import seems to be the only problematic one, I'll wrap that import statement in an if __name__ == "__main__" so it only runs once, which will be fine, as I'm passing the things I need from it as parameters. It's a rather janky solution, and probably not what others would consider as "Pythonic", but I have a feeling this will work.
UPDATE 7: The workaround was successful. I've managed to get the pool join time on an empty pool down to something reasonable (0.2 - 0.4 seconds). The downside of this approach is that there is definitely considerable overhead in passing the entire model as a parameter to the parallel function, which I needed to do as a result of creating the Pool in a different place than the model was being imported. I'm quite close, though.
I've done a little more digging, and think I've finally discovered the root of the problem, and it has everything to do with what's described here.
To summarize, on Linux systems, worker processes are forked from the main process, meaning the current process state is copied (which is why the import statements don't run multiple times). On Windows (and macOS), processes are spawned, meaning the interpreter starts again at the beginning of the "main" file, thus running all the import statements again. So, the behavior I'm seeing is not a bug, but I will need to rethink my program design to account for this.
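A minimal, runnable sketch of the guard idea from UPDATE 6 (the sleep(5) merely stands in for an expensive import such as sentence_transformers): module-level code is re-executed by every spawned child on Windows, so the expensive work is kept under the __main__ guard, which the children skip.

from multiprocessing import Pool
from time import sleep

def worker(x):
    # workers only need plain, picklable arguments
    return x * 2

if __name__ == "__main__":
    # stands in for an expensive import like sentence_transformers; because it
    # sits under the __main__ guard, spawned children on Windows (which re-import
    # this module under a different __name__) skip it
    sleep(5)
    with Pool() as pool:
        results = pool.map(worker, range(8))
    print(results)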

Process Pool Executor runs code outside of scope

I'm trying to run a bunch of processes in parallel with the ProcessPoolExecutor from concurrent.futures in Python.
The processes all run in parallel in a while loop, which is great, but for some reason the code outside of the main method runs repeatedly. I saw another answer say to use the __name__ == "__main__" check to fix this, but it still doesn't work.
Any ideas how I can get only the code inside the main method to run? My object keeps getting reset repeatedly.
EDIT: I ran my code using ThreadPoolExecutor instead and it fixed the problem, although I'm still curious about this.
import concurrent.futures
import time
from myFile import myObject

obj = myObject()

def main():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        while condition:
            for index in range(0, 10):
                executor.submit(obj.function, index, index + 1)
            executor.submit(obj.function2)
            time.sleep(5)
            print("test")

if __name__ == "__main__":
    main()
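For what it's worth, this matches the spawn behavior described in the previous question's updates: each worker process re-imports the main module, so module-level statements like obj = myObject() run again in every worker. A minimal sketch that makes the re-execution visible (the worker function here is just a placeholder):

import concurrent.futures
import os

# under the spawn start method (the Windows default) this line prints in the
# parent and again in every worker process that gets started
print("module-level code running in process %d" % os.getpid())

def work(x):
    return x * x

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        print(list(executor.map(work, range(4))))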

Python multiprocessing map using with statement does not stop

I am using the multiprocessing Python module to run parallel and unrelated jobs with a function similar to the following example:
import numpy as np
from multiprocessing import Pool

def myFunction(arg1):
    name = "file_%s.npy" % arg1
    A = np.load(arg1)
    A[A < 0] = np.nan
    np.save(arg1, A)

if __name__ == "__main__":
    N = list(range(50))
    with Pool(4) as p:
        p.map_async(myFunction, N)
        p.close()  # I tried with and without that statement
        p.join()   # I tried with and without that statement
    DoOtherStuff()
My problem is that the function DoOtherStuff is never executed; the processes switch to sleep mode in top and I need to kill the program with Ctrl+C to stop it.
Any suggestions?
You have at least a couple of problems. First, you are using map_async(), which does not block until the results of the task are completed. So what you're doing is starting the task with map_async(), but then immediately closing and terminating the pool (the with statement calls Pool.terminate() upon exiting).
When you add tasks to a process pool with methods like map_async, they are added to a task queue, which is handled by a worker thread that takes tasks off that queue and farms them out to worker processes, possibly spawning new processes as needed (actually there is a separate thread which handles that).
Point being, you have a race condition where you're likely terminating the Pool before any tasks are even started. If you want your script to block until all the tasks are done, just use map() instead of map_async(). For example, I rewrote your script like this:
import numpy as np
from multiprocessing import Pool

def myFunction(N):
    A = np.load(f'file_{N:02}.npy')
    A[A < 0] = np.nan
    np.save(f'file2_{N:02}.npy', A)

def DoOtherStuff():
    print('done')

if __name__ == "__main__":
    N = range(50)
    with Pool(4) as p:
        p.map(myFunction, N)
    DoOtherStuff()
I don't know what your use case is exactly, but if you do want to use map_async(), so that this task can run in the background while you do other stuff, you have to leave the Pool open, and manage the AsyncResult object returned by map_async():
result = pool.map_async(myFunction, N)
DoOtherStuff()
# Is my map done yet? If not, we should still block until
# it finishes before ending the process
result.wait()
pool.close()
pool.join()
You can see more examples in the linked documentation.
I don't know why you got a deadlock in your attempt; I was not able to reproduce that. It's possible there was a bug at some point that was then fixed, though you were also possibly invoking undefined behavior with your race condition, as well as by calling terminate() on a pool after it's already been join()ed. As for why your answer did anything at all, it's possible that with the multiple calls to apply_async() you managed to skirt around the race condition somewhat, but this is not at all guaranteed to work.

Python: multithreading in infinite loop

I have a code which is basically running an infinite loop, and in each iteration of the loop I run some instructions. Some of these instructions have to run in "parallel", which I do by using multiprocessing. Here is an example of my code structure:
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool

def buy_fruit(fruit, number):
    print('I bought ' + str(number) + ' times the following fruit:' + fruit)
    return 'ok'

def func1(parameter1, parameter2):
    myParameters = (parameter1, parameter2)
    pool = ThreadPool(2)
    data = pool.starmap(func2, zip(myParameters))
    return 'ok'

def func2(parameter1):
    print(parameter1)
    return 'ok'

while True:
    myFruits = ('apple', 'pear', 'orange')
    myQuantities = (5, 10, 2)
    pool = ThreadPool(2)
    data = pool.starmap(buy_fruit, zip(myFruits, myQuantities))
    func1('hello', 'hola')
I agree it's a bit messy, because I create pools within the main loop but also within functions.
So everything works well until the loop has run for a few minutes, and then I get an error:
"RuntimeError: can't start new thread"
I saw online that this is because I have opened too many threads.
What is the simplest way to close all my threads at the end of each loop iteration, so I can start "fresh" in the next iteration?
Thank you in advance for your time and help!
Best,
Julia
PS: The example code is just an example, my real function opens many threads in each loop and each function takes a few seconds to execute.
You are creating a new ThreadPool object inside the endless loop, which is a likely cause of your problem, because you are not terminating the threads at the end of each iteration. Have you tried creating the pool outside of the endless loop?
pool = ThreadPool(2)
while True:
    myFruits = ('apple', 'pear', 'orange')
    myQuantities = (5, 10, 2)
    data = pool.starmap(buy_fruit, zip(myFruits, myQuantities))
Alternatively, and to answer your question, if your use case for some reason requires creating a new ThreadPool object in each loop iteration, use a context manager (the with statement) to make sure all threads are closed upon leaving it.
while True:
    myFruits = ('apple', 'pear', 'orange')
    myQuantities = (5, 10, 2)
    with ThreadPool(2) as pool:
        data = pool.starmap(buy_fruit, zip(myFruits, myQuantities))
Note, however, the noticeable performance difference this has compared to the code above. Creating and terminating threads is expensive, which is why the first example will run much faster and is probably what you'll want to use.
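A rough way to see that difference for yourself; this sketch simplifies buy_fruit (dropping the print) so the timing mostly reflects pool setup and teardown, and the exact numbers will vary by machine:

from multiprocessing.dummy import Pool as ThreadPool
from time import perf_counter

def buy_fruit(fruit, number):
    return 'ok'

fruits = ('apple', 'pear', 'orange')
quantities = (5, 10, 2)

# variant 1: reuse a single pool across iterations
start = perf_counter()
pool = ThreadPool(2)
for _ in range(1000):
    pool.starmap(buy_fruit, zip(fruits, quantities))
pool.close()
pool.join()
print("reused pool:   %.3f s" % (perf_counter() - start))

# variant 2: create and tear down a pool on every iteration
start = perf_counter()
for _ in range(1000):
    with ThreadPool(2) as pool:
        pool.starmap(buy_fruit, zip(fruits, quantities))
print("pool per loop: %.3f s" % (perf_counter() - start))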
Regarding your edit involving "nested ThreadPools": I would suggest maintaining a single ThreadPool instance and passing a reference to it into your nested functions as required.
def func1(pool, parameter1, parameter2):
    ...

...
pool = ThreadPool(2)
while True:
    myFruits = ('apple', 'pear', 'orange')
    myQuantities = (5, 10, 2)
    data = pool.starmap(buy_fruit, zip(myFruits, myQuantities))
    func1(pool, 'hello', 'hola')

Multiprocessing with python3 only runs once

I have a problem running multiple processes in Python 3.
My program does the following:
1. Takes entries from an SQLite database and passes them to an input_queue
2. Creates multiple processes that take items off the input_queue, run them through a function and output the result to the output_queue
3. Creates a thread that takes items off the output_queue and prints them (this thread is obviously started before the first 2 steps)
My problem is that currently the 'function' in step 2 is only run as many times as the number of processes set, so for example if you set the number of processes to 8, it only runs 8 times then stops. I assumed it would keep running until it took all items off the input_queue.
Do I need to rewrite the function that takes the entries out of the database (step 1) into another process and then pass its output queue as an input queue for step 2?
Edit:
Here is an example of the code. I used a list of numbers as a substitute for the database entries, as it still performs the same way. I have 300 items in the list and I would like it to process all 300, but at the moment it just processes 10 (the number of processes I have assigned).
#!/usr/bin/python3
from multiprocessing import Process, Queue
import multiprocessing
from threading import Thread

## This is the class that would be passed to the multi_processing function
class Processor:
    def __init__(self, out_queue):
        self.out_queue = out_queue
    def __call__(self, in_queue):
        data_entry = in_queue.get()
        result = data_entry * 2
        self.out_queue.put(result)

# Performs the multiprocessing
def perform_distributed_processing(dbList, threads, processor_factory, output_queue):
    input_queue = Queue()
    # Create the Data processors.
    for i in range(threads):
        processor = processor_factory(output_queue)
        data_proc = Process(target=processor,
                            args=(input_queue,))
        data_proc.start()
    # Push entries to the queue.
    for entry in dbList:
        input_queue.put(entry)
    # Push stop markers to the queue, one for each thread.
    for i in range(threads):
        input_queue.put(None)
    data_proc.join()
    output_queue.put(None)

if __name__ == '__main__':
    output_results = Queue()

    def output_results_reader(queue):
        while True:
            item = queue.get()
            if item is None:
                break
            print(item)

    # Establish results collecting thread.
    results_process = Thread(target=output_results_reader, args=(output_results,))
    results_process.start()
    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]
    # Perform multi processing
    perform_distributed_processing(dbList, 10, Processor, output_results)
    # Wait for it all to finish.
    results_process.join()
A collection of processes that service an input queue and write to an output queue is pretty much the definition of a process pool.
If you want to know how to build one from scratch, the best way to learn is to look at the source code for multiprocessing.Pool, which is pretty simple Python, and very nicely written. But, as you might expect, you can just use multiprocessing.Pool instead of re-implementing it. The examples in the docs are very nice.
But really, you could make this even simpler by using an executor instead of a pool. It's hard to explain the difference (again, read the docs for both modules), but basically, a future is a "smart" result object, which means instead of a pool with a variety of different ways to run jobs and get results, you just need a dumb thing that doesn't know how to do anything but return futures. (Of course in the most trivial cases, the code looks almost identical either way…)
from concurrent.futures import ProcessPoolExecutor

def Processor(data_entry):
    return data_entry * 2

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        yield from executor.map(processor_factory, dbList)

if __name__ == '__main__':
    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]
    for result in perform_distributed_processing(dbList, 8, Processor):
        print(result)
Or, if you want to handle them as they come instead of in order:
from concurrent.futures import ProcessPoolExecutor, Future, as_completed

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        fs = (executor.submit(processor_factory, db) for db in dbList)
        yield from map(Future.result, as_completed(fs))
Notice that I also replaced your in-process queue and thread, because it wasn't doing anything but providing a way to interleave "wait for the next result" and "process the most recent result", and yield (or yield from, in this case) does that without all the complexity, overhead, and potential for getting things wrong.
Don't try to rewrite the whole multiprocessing library again. I think you can use any of the multiprocessing.Pool methods depending on your needs - if this is a batch job you can even use the synchronous multiprocessing.Pool.map() - only instead of pushing to an input queue, you write a generator that yields the input to the workers.
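A minimal sketch of that suggestion, with a hypothetical SQLite file, table, and column standing in for the real database, and process_entry standing in for the real per-row work:

import sqlite3
from multiprocessing import Pool

def process_entry(entry):
    # stand-in for the real per-row work
    return entry * 2

def db_entries(path):
    # generator that yields rows lazily instead of filling an input queue
    conn = sqlite3.connect(path)  # hypothetical database file
    try:
        for (value,) in conn.execute("SELECT value FROM entries"):  # hypothetical table/column
            yield value
    finally:
        conn.close()

if __name__ == '__main__':
    with Pool(8) as pool:
        # imap consumes the generator lazily and yields results in order
        for result in pool.imap(process_entry, db_entries("data.db")):
            print(result)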
