Queue (multiprocessing) where you can get each value twice - python

Is there any option to have a multiprocessing Queue where each value can be accessed twice?
My problem is that I have one "generator process" creating a constant flux of data, and I would like to access it in two different processes, each doing its own thing with the data.
A minimal example of the issue:
import multiprocessing as mp
import numpy as np

class Process1(mp.Process):
    def __init__(self, Data_Queue):
        mp.Process.__init__(self)
        self.Data_Queue = Data_Queue

    def run(self):
        while True:
            self.Data_Queue.get()
            # Do stuff with the data
            self.Data_Queue.task_done()

class Process2(mp.Process):
    def __init__(self, Data_Queue):
        mp.Process.__init__(self)
        self.Data_Queue = Data_Queue

    def run(self):
        while True:
            self.Data_Queue.get()
            # Do stuff with the data
            self.Data_Queue.task_done()

if __name__ == "__main__":
    data_Queue = mp.JoinableQueue()  # task_done() requires a JoinableQueue, not a plain Queue

    P1 = Process1(data_Queue)
    P1.start()

    P2 = Process2(data_Queue)
    P2.start()

    while True:  # Generate data
        data_Queue.put(np.random.rand(1000))
The idea is that both Process1 and Process2 should see all of the generated data in this example. As it stands, each one only gets some random portion of it.
Thanks for the help!
Update 1: As pointed out in some of the comments and answers, this becomes a little more complicated for two reasons I did not include in the initial question.
The data is externally generated on a non-constant schedule (I may receive tons of data for a few seconds, then wait minutes for more to come).
As such, data may arrive faster than it can be processed, so it needs to be queued in some way while it waits for its turn to be processed.

One way to solve your problem is, first, to use multiprocessing.Array to share, let's say, a numpy array with your data between the worker processes. Second, use a multiprocessing.Barrier to synchronize the main process and the workers when generating and processing data batches. And, finally, provide each worker process with its own queue to signal it when the next data batch is ready for processing. Below is a complete working example just to show you the idea:
#!/usr/bin/env python3

import os
import time
import ctypes
import multiprocessing as mp

import numpy as np

WORKERS = 5
DATA_SIZE = 10
DATA_BATCHES = 10

def process_data(data, queue, barrier):
    proc = os.getpid()
    print(f'[WORKER: {proc}] Started')

    while True:
        data_batch = queue.get()

        if data_batch is None:
            break

        arr = np.frombuffer(data.get_obj())
        print(f'[WORKER: {proc}] Started processing data {arr}')
        time.sleep(np.random.randint(0, 2))
        print(f'[WORKER: {proc}] Finished processing data {arr}')

        barrier.wait()

    print(f'[WORKER: {proc}] Finished')

def generate_data_array(i):
    print(f'[DATA BATCH: {i}] Start generating data... ', end='')
    time.sleep(np.random.randint(0, 2))
    data = np.random.randint(0, 10, size=DATA_SIZE)
    print(f'Done! {data}')
    return data

if __name__ == '__main__':
    data = mp.Array(ctypes.c_double, DATA_SIZE)
    data_barrier = mp.Barrier(WORKERS + 1)
    workers = []

    # Start workers:
    for _ in range(WORKERS):
        data_queue = mp.Queue()
        p = mp.Process(target=process_data, args=(data, data_queue, data_barrier))
        p.start()
        workers.append((p, data_queue))

    # Generate data batches in the main process:
    for i in range(DATA_BATCHES):
        arr = generate_data_array(i + 1)
        data_arr = np.frombuffer(data.get_obj())
        np.copyto(data_arr, arr)

        for _, data_queue in workers:
            # Signal workers that the new data batch is ready:
            data_queue.put(True)

        data_barrier.wait()

    # Stop workers:
    for worker, data_queue in workers:
        data_queue.put(None)
        worker.join()
Here, you start with the definition of the shared data array data and the barrier data_barrier used for process synchronization. Then, in a loop, you instantiate a queue data_queue and create and start a worker process p, passing the shared data array, the queue instance, and the shared barrier instance data_barrier as its parameters. Once the workers have been started, you generate data batches in a loop, copy each generated numpy array into the shared data array, and signal the processes via their queues that the next data batch is ready for processing. Then, you wait on the barrier until all the worker processes have finished their work before generating the next data batch. In the end, you send a None signal to all the processes to make them quit their infinite processing loops.
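If copying each batch to every consumer is acceptable and you do not need the shared array, a simpler variant (a sketch, not part of the example above) is to give each consumer its own queue and have the generator put every item into all of them; the queues then also buffer bursts of data while the consumers catch up:
import multiprocessing as mp
import numpy as np

def consumer(name, q):
    while True:
        item = q.get()
        if item is None:          # sentinel: no more data
            break
        # this consumer sees the full data stream
        print(name, item.mean())

if __name__ == '__main__':
    queues = [mp.Queue() for _ in range(2)]   # one queue per consumer
    procs = [mp.Process(target=consumer, args=(f'worker{i}', q))
             for i, q in enumerate(queues)]
    for p in procs:
        p.start()

    for _ in range(5):                        # generator loop
        data = np.random.rand(1000)
        for q in queues:
            q.put(data)                       # every consumer gets every item

    for q in queues:
        q.put(None)                           # tell consumers to stop
    for p in procs:
        p.join()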

Related

Python multiprocessing write to file with starmap_async()

I'm currently setting up an automated simulation pipeline for OpenFOAM (a CFD library) using the PyFoam library within Python to create a large database for machine learning purposes. The database will have around 500k distinct simulations. To run this pipeline on multiple machines, I'm using the multiprocessing.Pool.starmap_async(args) option, which will continually start a new simulation once the old simulation has completed.
However, since some of the simulations might / will crash, I want to generate a text file with all cases which have crashed.
I've already found this thread which implements multiprocessing.Manager.Queue() and adds a listener, but I failed to get it running with starmap_async(). For my testing I'm trying to print the case name for any simulation which has completed, but currently only one entry is written into the text file instead of all of them (the simulations all complete successfully).
So my question is: how can I write a message to a file for each simulation which has completed?
The current code layout looks roughly like this - only the important snippet has been added, as the remaining code can't be run without OpenFOAM and additional custom scripts which were created for the automation.
Any help is highly appreciated! :)
from PyFoam.Execution.BasicRunner import BasicRunner
from PyFoam.Execution.ParallelExecution import LAMMachine

import numpy as np
import multiprocessing
import itertools
import psutil

# Defining global variables
manager = multiprocessing.Manager()
queue = manager.Queue()

def runCase(airfoil, angle, velocity):
    # define simulation name
    newCase = str(airfoil) + "_" + str(angle) + "_" + str(velocity)

    '''
    A lot of pre-processing commands to prepare the simulation
    which have been removed from the snippet, such as generate geometry, create mesh etc...
    '''

    # run simulation
    machine = LAMMachine(nr=4)  # set number of cores for parallel execution
    simulation = BasicRunner(argv=[solver, "-case", case.name], silent=True, lam=machine, logname="solver")
    simulation.start()  # start simulation

    # check if simulation has completed
    if simulation.runOK():
        # write message into queue
        queue.put(newCase)
    if not simulation.runOK():
        print("Simulation did not run successfully")

def listener(queue):
    fname = 'errors.txt'
    msg = queue.get()
    while True:
        with open(fname, 'w') as f:
            if msg == 'complete':
                break
            f.write(str(msg) + '\n')

def main():
    # Create parameter list
    angles = np.arange(-5, 0, 1)
    machs = np.array([0.15])
    nacas = ['0012']
    paramlist = list(itertools.product(nacas, angles, np.round(machs, 9)))

    # create number of processes and keep 2 cores idle for other processes
    nCores = psutil.cpu_count(logical=False) - 2
    nProc = 4
    nProcs = int(nCores / nProc)

    with multiprocessing.Pool(processes=nProcs) as pool:
        pool.apply_async(listener, (queue,))  # start the listener
        pool.starmap_async(runCase, paramlist).get()  # run parallel simulations
        queue.put('complete')
        pool.close()
        pool.join()

if __name__ == '__main__':
    main()
First, when your with multiprocessing.Pool(processes=nProcs) as pool: block exits, there is an implicit call to pool.terminate(), which kills all pool processes and with them any running or queued-up tasks. There is no point in calling queue.put('complete') since nobody is listening.
Second, your 'listener' task gets only a single message from the queue. If it is 'complete', it terminates immediately. If it is anything else, it just loops forever writing the same message to the output file. This cannot be right, can it? Did you forget an additional call to queue.get() inside your loop?
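For reference, a corrected listener (a sketch only, still using the 'complete' sentinel from your code) would call queue.get() on every iteration and keep the file open for the duration of the loop:
def listener(queue):
    fname = 'errors.txt'
    with open(fname, 'w') as f:
        while True:
            msg = queue.get()        # fetch a new message on every iteration
            if msg == 'complete':    # sentinel sent by the main process
                break
            f.write(str(msg) + '\n')
            f.flush()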
Third, I do not quite follow your computation for nProcs. Why the division by 4? If you had 5 physical processors, nProcs would be computed as 0. Do you mean something like:
nProcs = psutil.cpu_count(logical=False) // 4
if nProcs == 0:
    nProcs = 1
elif nProcs > 1:
    nProcs -= 1  # Leave a core free
Fourth, why do you need a separate "listener" task at all? Have your runCase task return an appropriate message to the main process according to how it completes. In the code below, multiprocessing.pool.Pool.imap is used so that results can be processed as soon as each task completes:
from PyFoam.Execution.BasicRunner import BasicRunner
from PyFoam.Execution.ParallelExecution import LAMMachine

import numpy as np
import multiprocessing
import itertools
import psutil

def runCase(tpl):
    # Unpack tuple:
    airfoil, angle, velocity = tpl

    # define simulation name
    newCase = str(airfoil) + "_" + str(angle) + "_" + str(velocity)

    ...  # Code omitted for brevity

    # check if simulation has completed
    if simulation.runOK():
        return ''  # No error
    # Simulation did not run successfully:
    return f"Simulation {newCase} did not run successfully"

def main():
    # Create parameter list
    angles = np.arange(-5, 0, 1)
    machs = np.array([0.15])
    nacas = ['0012']
    # There is no reason to convert this into a list; it
    # can be lazily computed:
    paramlist = itertools.product(nacas, angles, np.round(machs, 9))

    # create number of processes and keep 1 core idle for main process
    nCores = psutil.cpu_count(logical=False) - 1
    nProc = 4
    nProcs = int(nCores / nProc)

    with multiprocessing.Pool(processes=nProcs) as pool:
        with open('errors.txt', 'w') as f:
            # Process message results as soon as each task ends.
            # Use method imap_unordered if you do not care about the order
            # of the messages in the output.
            # paramlist already yields (airfoil, angle, velocity) tuples, which
            # imap passes to runCase as its single argument:
            for msg in pool.imap(runCase, paramlist):
                if msg != '':  # Error completion
                    print(msg)
                    print(msg, file=f)
    pool.join()  # Not really necessary here

if __name__ == '__main__':
    main()

Python Multiprocessing cannot Join for Large Data Set

I am trying to compute the cosine similarity of a 260774x96 data matrix. I was using the full-size DataFrame to compute the cosine similarity with this code:
similarities = cdist(Dataframe, Dataframe, metric='cosine')
However, the Jupyter Notebook ran out of memory (64 GB) and crashed the kernel. So I decided to compute a single row against the DataFrame each time, and then sort, trim and save it in each loop. The issue is that it takes too much time, about 12 hours.
So I applied multiprocessing to speed it up. For small data it works (a single row computed against a 500-row DataFrame). However, when computing a single row against the full-size DataFrame, the processes run and put data into the queue, but join() just does not return. The main function keeps running even though the child processes ended with no error. If I print the queue in another cell, it has all the data.
Another problem is that the input data queue cannot take the full-size DataFrame; it shows a Full error.
I don't know why the program gets stuck on the join() step, or whether there is another practical way to do this computation in parallel.
from multiprocess import Lock, Process, Queue, current_process, Array
from time import time
import queue  # imported for using queue.Empty exception
import numpy as np
from scipy.spatial.distance import cdist
from scipy.spatial import distance
# from cos_process import cos_process
from sklearn.metrics import pairwise_distances

def cos_process(idx, result_queue, index_queue, df_data_only_numpy):
    print(f"process start {current_process().name}")
    # shrink the data size so that the multiprocessing join can work:
    # df_data_only_numpy = df_data_only_numpy[:500]
    while True:
        try:
            task_index_list = index_queue.get_nowait()
        except queue.Empty:
            break
        else:
            cosine_dict = {}
            for i in task_index_list:
                similarities = cdist([df_data_only_numpy[i]], df_data_only_numpy, metric='cosine')
                sorted_save_data = np.sort(similarities[0])[:20]  # the cosine similarity values after sorting
                sorted_save_key = np.argsort(similarities[0])[:20]  # the indices of the sorted similarities
                # make a dictionary {index: {index: cosine, index: cosine}, ...}
                cosine_dict[i] = {int(i): float(data) if data == data else (data)
                                  for i, data in zip(sorted_save_key, sorted_save_data)}
            # print(cosine_dict)
            try:
                # put the dictionary data into the queue
                result_queue.put_nowait(cosine_dict)
            except queue.Full:
                print("queue full in process")
    print(f"end process {current_process().name}")
    return True

def main():
    number_of_processes = 10  # create 10 processes
    process_compute_time = 1000  # each task covers 1000 rows
    df_data_only_numpy = df_data_only.to_numpy()

    index_queue = Queue()  # indices of the df
    result_queue = Queue()  # stores the output of the similarity

    processes = []

    # for idx in df_data_only.T:
    # only test 1000 rows of data in this case; the dataframe has 260774 rows and 96 columns
    for idx in range(0, 1000, process_compute_time):
        try:
            index_queue.put_nowait(list(range(idx, idx + process_compute_time)))
        except queue.Full:
            print("full")

    print("creating process")
    # creating processes
    for w in range(number_of_processes):
        p = Process(target=cos_process, args=(w, result_queue, index_queue, df_data_only_numpy))
        processes.append(p)
        p.start()

    # completing process
    for p in processes:
        print("before join")
        p.join()
        print("finish join")

    # print the output
    while not result_queue.empty():
        print(result_queue.get())

    return True

if __name__ == '__main__':
    main()
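A likely reason for the hang, consistent with the symptoms described, is the multiprocessing guideline on joining processes that use queues: a child process that has put sizeable items on a queue will not terminate until its buffered data has been flushed and consumed, so joining the workers before draining result_queue can deadlock. A sketch of the usual reordering, using the names from the code above, would be:
    # Sketch: drain result_queue before joining, so the workers'
    # queue feeder threads can flush their buffered data and exit.
    results = []
    while any(p.is_alive() for p in processes) or not result_queue.empty():
        try:
            results.append(result_queue.get(timeout=1))
        except queue.Empty:
            pass

    for p in processes:
        p.join()  # safe now that the queue has been emptied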

Get thread index from a multiprocessing.pool.ThreadPool in Python

I'm using a library whose multiprocessing is implemented using multiprocessing.pool.ThreadPool(processes).
How is it possible to get/compute the thread index from the pool (ranging from 0 to processes-1)?
I've been reading the documentation and searching the web without finding a convincing solution. I can get the thread ID (using threading.get_ident()), and could go through all the threads to construct a mapping between their index and their ID, but I would need to use some kind of time.sleep() to ensure I browse them all... Can you think of any better solution?
The idea is to create a worker function, called test_worker in the example below, that returns its thread identity and the argument it is called with, which takes on values of 0 ... pool size - 1. We then submit tasks with:
pool.map(test_worker, range(POOLSIZE), 1)
By specifying a chunksize value of 1, the idea is that each thread will be given just one task to process, with the first thread given argument 0, the second thread argument 1, etc. We must ensure that test_worker gives up control of the processor to the other threads in the pool; if it consisted only of a return statement, the first thread might end up processing all the tasks. Essentially, tasks are placed on a single queue in lists of chunksize tasks, and each pool thread takes the next available list and processes the tasks in it. But if the task is trivial enough, the first thread could actually grab all the lists because it never gives up control of the processor to the other threads. To avoid this, we insert a call to time.sleep in our worker.
from multiprocessing.pool import ThreadPool
import threading
import time

def test_worker(i):
    # To ensure that the worker gives up control of the processor we sleep.
    # Otherwise, the same thread may be given all the tasks to process.
    time.sleep(.1)
    return threading.get_ident(), i

def real_worker(x):
    # return the argument squared and the id of the thread that did the work
    return x**2, threading.get_ident()

POOLSIZE = 5
with ThreadPool(POOLSIZE) as pool:
    # chunksize = 1 is critical to be sure that we have 1 task per thread:
    thread_dict = {result[0]: result[1]
                   for result in pool.map(test_worker, range(POOLSIZE), 1)}
    assert len(thread_dict) == POOLSIZE
    print(thread_dict)
    value, id = pool.apply(real_worker, (7,))
    print(value)  # should be 49
    assert id in thread_dict
    print('thread index = ', thread_dict[id])
Prints:
{16880: 0, 16152: 1, 7400: 2, 13320: 3, 168: 4}
49
thread index = 4
A Version That Does Not Use sleep
from multiprocessing.pool import ThreadPool
import threading
import time

def test_worker(i, event):
    if event:
        event.wait()
    return threading.get_ident(), i

def real_worker(x):
    return x**2, threading.get_ident()

# Let's use a really big pool size for a good test:
POOLSIZE = 500
events = [threading.Event() for _ in range(POOLSIZE - 1)]
with ThreadPool(POOLSIZE) as pool:
    thread_dict = {}
    # These first POOLSIZE - 1 tasks will wait until we set their events
    results = [pool.apply_async(test_worker, args=(i, event)) for i, event in enumerate(events)]
    # This last one is not passed an event and so it does not wait.
    # When it completes, we can be sure the other tasks, which have been submitted before it,
    # have already been picked up by the other threads in the pool.
    id, index = pool.apply(test_worker, args=(POOLSIZE - 1, None))
    thread_dict[id] = index
    # let the others complete:
    for event in events:
        event.set()
    for result in results:
        id, index = result.get()
        thread_dict[id] = index
    assert len(thread_dict) == POOLSIZE
    value, id = pool.apply(real_worker, (7,))
    print(value)  # should be 49
    assert id in thread_dict
    print('thread index = ', thread_dict[id])
Prints:
49
thread index = 499
It is possible to get the index of the thread in the ThreadPool without using sleep, by using the initializer function. This is a function that is called once immediately after the thread is started. It can be used to acquire resources, such as a database connection, to use exactly one connection per thread.
Use threading.local() to make sure that each thread can store and access its own resource. In the example below we treat the index in the ThreadPool as a resource. Use a Queue to make sure no two threads grab the same resource.
from multiprocessing.pool import ThreadPool
import threading
import time
import queue

POOL_SIZE = 4

local_storage = threading.local()

def init_thread_resource(resources):
    local_storage.pool_idx = resources.get(False)
    print(f'\nThread {threading.get_ident()} has pool_idx {local_storage.pool_idx}')
    ## A thread can also initialize other things here, meant for only 1 thread, e.g.
    # local_storage.db_connection = open_db_connection()

def task(item):
    # When running this example you may see all the tasks are picked up by one thread.
    # Uncomment time.sleep below to see each of the threads do some work.
    # This is not required to assign a unique index to each thread.
    # time.sleep(1)
    return f'Thread {threading.get_ident()} with pool_idx {local_storage.pool_idx} handled {item}'

def run_concurrently():
    # Initialize the resources
    resources = queue.Queue(POOL_SIZE)  # one resource per thread
    for pool_idx in range(POOL_SIZE):
        resources.put(pool_idx, False)

    container = range(500, 500 + POOL_SIZE)  # Offset by 500 to not confuse the items with the pool_idx
    with ThreadPool(POOL_SIZE, init_thread_resource, [resources]) as pool:
        records = pool.map(task, container)
    print('\n'.join(records))

run_concurrently()
This outputs:
Thread 32904 with pool_idx 0 handled 500
Thread 14532 with pool_idx 1 handled 501
Thread 32008 with pool_idx 2 handled 502
Thread 31552 with pool_idx 3 handled 503

Multiprocesses will not run in Parallel on Windows on Jupyter Notebook

I'm currently working on Windows in a Jupyter notebook and have been struggling to get multiprocessing to work. It does not run all my apply_async calls in parallel; it runs them one at a time. Please provide some guidance on where I am going wrong. I need to put the results into a variable for future use. What am I not understanding?
import multiprocessing as mp
import cylib
Pool = mp.Pool(processes=4)
result1 = Pool.apply_async(cylib.f, [v]) # evaluate asynchronously
result2 = Pool.apply_async(cylib.f, [x]) # evaluate asynchronously
result3 = Pool.apply_async(cylib.f, [y]) # evaluate asynchronously
result4 = Pool.apply_async(cylib.f, [z]) # evaluate asynchronously
vr = result1.get(timeout=420)
xr = result2.get(timeout=420)
yr = result3.get(timeout=420)
zr = result4.get(timeout=420)
The tasks are executing in parallel.
However, this is fetching the results synchronously i.e. "wait until result1 is ready, then wait until result2 is ready, .." and so on.
vr = result1.get(timeout=420)
xr = result2.get(timeout=420)
yr = result3.get(timeout=420)
zr = result4.get(timeout=420)
Consider the following example code, where each task is polled asynchronously
from time import sleep
import multiprocessing as mp

pool = mp.Pool(processes=4)

# Create tasks with longer wait first
tasks = {i: pool.apply_async(sleep, [t]) for i, t in enumerate(reversed(range(3)))}

done = set()
# Keep polling until all tasks complete
while len(done) < len(tasks):
    for i, t in tasks.items():
        # Skip completed tasks
        if i in done:
            continue
        result = None
        try:
            result = t.get(timeout=0)
        except mp.TimeoutError:
            pass
        else:
            print("Task #:{} complete".format(i))
            done.add(i)
You can replicate something like the above or use the callback argument on apply_async to perform some handling automatically as tasks complete.
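For example, a minimal sketch of the callback variant, reusing the sleep placeholder task from above (on_done is a name chosen here purely for illustration):
from time import sleep
import multiprocessing as mp

def on_done(result):
    # Runs in the parent process as soon as a task finishes.
    print("Task complete, result:", result)

if __name__ == '__main__':
    with mp.Pool(processes=4) as pool:
        async_results = [pool.apply_async(sleep, [t], callback=on_done)
                         for t in reversed(range(3))]
        for r in async_results:
            r.wait()  # block until every task has fired its callback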

Python Multiprocessing: Fastest way to signal an event to all processes?

I'm doing a Monte Carlo simulation with multiple processes using Python's multiprocessing library. The processes basically guess some object, and if it meets some condition it is added to a shared list. My calculation is finished when this list meets some condition.
My current code looks like this: (pseudocode without unimportant details)
mgr = Manager()
ns = mgr.Namespace()
ns.mylist = []
ns.othersharedstuff = x
killsig = mgr.Event()
processes = [MyProcess(ns, killsig) for _ in range(8)]
for p in processes: p.start()
for p in processes: p.join()
get data from ns.mylist()

def MyProcess.run(self):
    localdata = y
    while not killsig.is_set():
        x = guessObject()
        if x.meetsCondition():
            add x to ns.mylist and put local data into ns()
            if ns.mylist meets condition:
                killsig.set()
    put local data into ns()
When I replace 'while not killsig.is_set():' with 'while True:', the speed of my simulation increases by about 25%! (except it doesn't terminate anymore of course)
Is there a faster way than using signals? It is not important if the unsynchronized local data of each process is lost, so something involving process.terminate() would be fine too.
Since you've got the original process that has a list of all your subprocesses, why not use that to terminate the processes? I'm picturing something like this:
ns.othersharedstuff = x
killsig = mgr.Event()
processes = [MyProcess(ns, killsig) for _ in range(8)]
for p in processes: p.start()

while not killsig.is_set():
    time.sleep(0.01)  # 10 milliseconds

for p in processes: p.terminate()
get data from ns.mylist()
Then, inside MyProcess.run, you can just change the loop to while True:, since the parent process now takes care of stopping the workers.
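For completeness, a small runnable sketch of that combination, with trivial stand-ins for the question's guessObject and meetsCondition (both names and bodies are placeholders):
import time
from multiprocessing import Process, Manager

class MyProcess(Process):
    def __init__(self, ns, killsig):
        super().__init__()
        self.ns = ns
        self.killsig = killsig

    def run(self):
        while True:                   # no per-iteration event check
            value = sum(range(1000))  # stand-in for guessObject()
            if value > 0:             # stand-in for meetsCondition()
                self.ns.result = value
                self.killsig.set()    # the parent will terminate us

if __name__ == '__main__':
    mgr = Manager()
    ns = mgr.Namespace()
    killsig = mgr.Event()
    procs = [MyProcess(ns, killsig) for _ in range(2)]
    for p in procs:
        p.start()
    while not killsig.is_set():
        time.sleep(0.01)              # parent polls the event
    for p in procs:
        p.terminate()
    print(ns.result)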
