How do I retrieve output from multiprocessing in Python?

So, I'm trying to speed up one routine by using the multiprocessing module in Python. I want to be able to read several .csv files by splitting the job among several cores. For that I have:
import multiprocessing
import numpy as np

def csvreader(string):
    time, signal = np.genfromtxt(string, delimiter=',', unpack=True)
    return time, signal
Then I call this function by saying:
if __name__ == '__main__':
    for i in range(0, 2):
        p = multiprocessing.Process(target=CSVReader.csvreader, args=(string_array[i],))
        p.start()
The thing is that this doesn't store any output. I have read the forums online and seen that there might be a way with multiprocessing.Queue, but I don't understand it quite well.
Is there any simple and straightforward method?

Your best bets are multiprocessing.Queue or multiprocessing.Pipe, which are designed exactly for this problem. They allow you to send data between processes in a safe and easy way.
If you'd like to return the output of your csvreader function, you should pass another argument to it: the multiprocessing.Queue through which the data will be sent back to the main process. Instead of returning the values, place them on the queue, and the main process will retrieve them at some point later. If they're not ready when the main process tries to get them, by default it will just block (wait) until they are available.
Your function would now look like this:
def csvreader(string, q):
    q.put(np.genfromtxt(string, delimiter=',', unpack=True))
The main routine would be:
if __name__ == '__main__':
    q = multiprocessing.Queue()
    for i in range(2):
        p = multiprocessing.Process(target=csvreader, args=(string_array[i], q))
        p.start()

    # Do anything else you need in here

    time = np.empty(2, dtype='object')
    signal = np.empty(2, dtype='object')
    for i in range(2):
        time[i], signal[i] = q.get()  # Returns output, or blocks until ready

    # Process my output
Note that you have to call Queue.get() for each item you want to return.
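Also note that q.get() returns items in whichever order the worker processes finish, which is not necessarily the order you started them in. If the pairing with string_array matters, one variant (a sketch, not part of the original answer) is to tag each result with its index:
def csvreader(string, q, i):
    # Tag the result with its index so the parent can put it in the right slot.
    q.put((i, np.genfromtxt(string, delimiter=',', unpack=True)))

# ...and in the main process, pass the index via args=(string_array[i], q, i) and unpack:
for _ in range(2):
    i, (t, s) = q.get()
    time[i], signal[i] = t, s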
Have a look at the documentation on the multiprocessing module for more examples and information.

Using the example from the introduction to the documentation:
from multiprocessing import Pool

if __name__ == '__main__':
    pool = Pool(2)
    results = pool.map(CSVReader.csvreader, string_array[:2])
    print(results)
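Note that pool.map passes exactly one argument to the target and returns the results as a list in the same order as string_array, and the callable you pass must be picklable (e.g. defined at module level). It is also good practice to shut the pool down afterwards with pool.close() and pool.join(), or to create it with a with-statement.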

Related

Python multiprocessing and Manager

I am using Python's multiprocessing to create a parallel application. Processes need to share some data, for which I use a Manager. However, I have some common functions which processes need to call and which need to access the data stored by the Manager object. My question is whether I can avoid needing to pass the Manager instance to these common functions as an argument and rather use it like a global. In other words, consider the following code:
import multiprocessing as mp

manager = mp.Manager()
global_dict = manager.dict(a=[0])

def add():
    global_dict['a'] += [global_dict['a'][-1] + 1]

def foo_parallel(var):
    add()
    print(var)

num_processes = 5
p = []
for i in range(num_processes):
    p.append(mp.Process(target=foo_parallel, args=(global_dict,)))

[pi.start() for pi in p]
[pi.join() for pi in p]
This runs fine and returns p=[0,1,2,3,4,5] on my machine. However, is this "good form"? Is this a good way of doing it, just as good as defining add(var) and calling add(var) instead?
Your code example seems to have bigger problems than form. You get your desired output only by luck: repeated execution will yield different results. That's because += is not an atomic operation. Multiple processes can read the same old value one after another before any of them has updated it, and then they will all write back the same values. To prevent this behaviour, you'll additionally have to use a Manager.Lock.
As to your original question about "good form":
IMO it would be cleaner to let the main function of the child process, foo_parallel, pass global_dict explicitly into a generic function add(var). That would be a form of dependency injection, which has some advantages. In your example, non-exhaustively, it:
allows isolated testing
increases code reusability
makes debugging easier (failure to access the managed object isn't delayed until add is called; fail fast)
reduces boilerplate code (for example, try/except blocks on resources that multiple functions need)
As a side note: using list comprehensions only for their side effects is considered a 'code smell'. If you don't need a list as the result, just use a for-loop.
Code:
import os
from multiprocessing import Process, Manager

def add(l):
    l += [l[-1] + 1]
    return l

def foo_parallel(global_dict, lock):
    with lock:
        l = global_dict['a']
        global_dict['a'] = add(l)
        print(os.getpid(), global_dict)

if __name__ == '__main__':
    N_WORKERS = 5

    with Manager() as manager:
        lock = manager.Lock()
        global_dict = manager.dict(a=[0])

        pool = [Process(target=foo_parallel, args=(global_dict, lock))
                for _ in range(N_WORKERS)]

        for p in pool:
            p.start()
        for p in pool:
            p.join()

        print('result', global_dict)
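With the lock held around the read-modify-write, each worker extends the list exactly once, so the final line should print something like result {'a': [0, 1, 2, 3, 4, 5]} regardless of the order in which the processes are scheduled.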

Python's multiprocessing.Queue's get() method

I am writing my first Python 2.7 multiprocessing program (woohoo).
I am using multiprocessing Queues to retrieve data from my subprocesses. My question is about the Queue's .get() method. Is there any guarantee that I will get the full object (no matter how large it is) when I call the method? If not, how will it be split?
The doc says: "Remove and return an item from the queue. […]". But I am not sure if this means that I might end up getting chunks of an object or if it is rebuilt by the method's internals.
Here is some sample code: (stats might get pretty large)
p = Process(target=process_analyze_db, args=(db_names[j], j, queue_stats))
processes.append(p)
p.start()

while 1:
    running = any(p.is_alive() for p in processes)
    while not queue_stats.empty():  # Is this loop necessary?
        data = queue_stats.get_nowait()
        results[data[0]] = data[1]
    if not running:
        break

# In the process
def process_analyze_db(db_name, profile_nr, queue_stats):
    # Do lots of stuff
    queue_stats.put([profile_nr, stats])

Python multiprocessing an enormous amount of data

I have searched the site but I am not sure precisely what terms would yield relevant answers, my apologies if this question is redundant.
I need to process a very very large matrix (14,000,000 * 250,000) and would like to exploit Python's multiprocessing module to speed things up. For each pair of columns in the matrix I need to apply a function which will then store the results in a proprietary class.
I will be implementing a double for loop which provides the necessary combinations of columns.
I do not want to load up a pool with 250,000 tasks, as I fear the memory usage will be significant. Ideally, I would like to have one column at a time be tasked out amongst the pool, i.e.
Process 1 takes Column A and Column B and a function F takes A,B and G and then stores the result in Class G[A,B]
Process 2 takes Column A and Column C and proceeds similarly
The processes will never access the same element of G.
So I would like to pause the for loop every N tasks. The set/get methods of G will be overridden to perform some back-end tasks.
What I do not understand is whether or not pausing the loop is necessary. I.e., is Python smart enough to only take what it can work on, or will it populate a massive amount of tasks?
Lastly, I am unclear on how the results work. I just want them to be set in G and not to return anything. I do not want to have to worry about .get() etc., but from my understanding the pool methods return a result object. Can I just ignore this?
Is there a better way? Am I completely lost?
First off, you will want to create a multiprocessing Pool. You set up how many workers you want and then use map to start up tasks. I am sure you already know, but here are the Python multiprocessing docs.
You say that you don't want to return data because you don't need to, but how are you planning on viewing the results? Will each task write the data to disk? To pass data between your processes you will want to use something like a multiprocessing Queue.
Here is example code from the link on how to use process and queue:
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()
And this is an example of using the Pool:
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes
    result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
    print(result.get(timeout=1))          # prints "100" unless your computer is *very* slow
    print(pool.map(f, range(10)))         # prints "[0, 1, 4, ..., 81]"
Edit: @goncalopp makes a very important point that you may not want to do heavy numerical calculations in pure Python due to how slow it is. NumPy is a great package for number crunching.
If you are heavily I/O-bound due to writing to disk in each process, you should consider running something like 4*num_processors workers so that the CPU always has something to do. You should also make sure you have a very fast disk :)
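As a rough sketch of that sizing heuristic (process_pair is a hypothetical placeholder for the real per-pair computation and its disk write, and the 4x factor is just the rule of thumb above):
import os
from multiprocessing import Pool

def process_pair(pair):
    # Placeholder for the real column-pair computation plus the disk write.
    a, b = pair
    return a + b

if __name__ == '__main__':
    # Oversubscribe the pool so workers blocked on disk I/O don't leave the CPUs idle.
    n_workers = 4 * (os.cpu_count() or 1)
    with Pool(processes=n_workers) as pool:
        results = pool.map(process_pair, [(1, 2), (3, 4)])
        print(results)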

Multiprocessing with python3 only runs once

I have a problem running multiple processes in Python 3.
My program does the following:
1. Takes entries from an SQLite database and passes them to an input_queue
2. Creates multiple processes that take items off the input_queue, run them through a function and output the results to the output_queue
3. Creates a thread that takes items off the output_queue and prints them (this thread is obviously started before the first two steps)
My problem is that currently the 'function' in step 2 is only run as many times as the number of processes set, so for example if you set the number of processes to 8, it only runs 8 times then stops. I assumed it would keep running until it took all items off the input_queue.
Do I need to rewrite the function that takes the entries out of the database (step 1) into another process and then pass its output queue as an input queue for step 2?
Edit:
Here is an example of the code. I used a list of numbers as a substitute for the database entries, as it still performs the same way. I have 300 items in the list and I would like it to process all 300, but at the moment it just processes 10 (the number of processes I have assigned).
#!/usr/bin/python3
from multiprocessing import Process, Queue
import multiprocessing
from threading import Thread

## This is the class that would be passed to the multi_processing function
class Processor:
    def __init__(self, out_queue):
        self.out_queue = out_queue

    def __call__(self, in_queue):
        data_entry = in_queue.get()
        result = data_entry * 2
        self.out_queue.put(result)

# Performs the multiprocessing
def perform_distributed_processing(dbList, threads, processor_factory, output_queue):
    input_queue = Queue()

    # Create the data processors.
    for i in range(threads):
        processor = processor_factory(output_queue)
        data_proc = Process(target=processor,
                            args=(input_queue,))
        data_proc.start()

    # Push entries to the queue.
    for entry in dbList:
        input_queue.put(entry)

    # Push stop markers to the queue, one for each thread.
    for i in range(threads):
        input_queue.put(None)

    data_proc.join()
    output_queue.put(None)

if __name__ == '__main__':
    output_results = Queue()

    def output_results_reader(queue):
        while True:
            item = queue.get()
            if item is None:
                break
            print(item)

    # Establish results collecting thread.
    results_process = Thread(target=output_results_reader, args=(output_results,))
    results_process.start()

    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]

    # Perform multi processing
    perform_distributed_processing(dbList, 10, Processor, output_results)

    # Wait for it all to finish.
    results_process.join()
A collection of processes that service an input queue and write to an output queue is pretty much the definition of a process pool.
If you want to know how to build one from scratch, the best way to learn is to look at the source code for multiprocessing.Pool, which is pretty simple Python, and very nicely written. But, as you might expect, you can just use multiprocessing.Pool instead of re-implementing it. The examples in the docs are very nice.
But really, you could make this even simpler by using an executor instead of a pool. It's hard to explain the difference (again, read the docs for both modules), but basically, a future is a "smart" result object, which means instead of a pool with a variety of different ways to run jobs and get results, you just need a dumb thing that doesn't know how to do anything but return futures. (Of course in the most trivial cases, the code looks almost identical either way…)
from concurrent.futures import ProcessPoolExecutor

def Processor(data_entry):
    return data_entry * 2

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        yield from executor.map(processor_factory, dbList)

if __name__ == '__main__':
    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]
    for result in perform_distributed_processing(dbList, 8, Processor):
        print(result)
Or, if you want to handle them as they come instead of in order:
from concurrent.futures import Future, ProcessPoolExecutor, as_completed

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        fs = (executor.submit(processor_factory, db) for db in dbList)
        yield from map(Future.result, as_completed(fs))
Notice that I also replaced your in-process queue and thread, because they weren't doing anything but providing a way to interleave "wait for the next result" with "process the most recent result", and yield (or yield from, in this case) does that without all the complexity, overhead, and potential for getting things wrong.
Don't try to rewrite the whole multiprocessing library. I think you can use any of the multiprocessing.Pool methods depending on your needs; if this is a batch job you can even use the synchronous multiprocessing.Pool.map(). Only, instead of pushing to an input queue, you write a generator (or any iterable) that yields the inputs to the workers, as sketched below.
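A minimal sketch of that idea, with the same doubling logic standing in for the real per-entry work and a generator standing in for the database reads:
from multiprocessing import Pool

def process_entry(data_entry):
    return data_entry * 2

def read_entries():
    # Generator standing in for pulling rows out of the sqlite database.
    for i in range(300):
        yield i

if __name__ == '__main__':
    with Pool(processes=8) as pool:
        # map consumes the iterable and spreads the items across the workers;
        # results come back as a list in input order.
        for result in pool.map(process_entry, read_entries()):
            print(result)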

Python multiprocessing for parallel processes

I'm sorry if this is too simple for some people, but I still don't get the trick with python's multiprocessing. I've read
http://docs.python.org/dev/library/multiprocessing
http://pymotw.com/2/multiprocessing/basics.html
and many other tutorials and examples that google gives me... many of them from here too.
Well, my situation is that I have to compute many numpy matrices and I need to store them in a single numpy matrix afterwards. Let's say I want to use 20 cores (or that I can use 20 cores), but I haven't managed to successfully use the pool resource, since it keeps the processes alive till the pool "dies". So I thought of doing something like this:
from multiprocessing import Process, Queue
import numpy as np

def f(q, i):
    q.put(np.zeros((4, 4)))

if __name__ == '__main__':
    q = Queue()
    for i in range(30):
        p = Process(target=f, args=(q, i))
        p.start()
        p.join()

    result = q.get()
    while q.empty() == False:
        result += q.get()

    print(result)
But then it looks like the processes don't run in parallel; they run sequentially (please correct me if I'm wrong), and I don't know whether they die after they finish their computation (so that, for more than 20 processes, the ones that did their part leave the core free for another process). Plus, for a very large number (let's say 100,000), storing all those matrices (which may be really big too) in a queue will use a lot of memory, rendering the code useless, since the idea is to add every result on each iteration into the final result, for example by using a lock (and its acquire() and release() methods). But if this code isn't running in parallel, the lock is useless too...
I hope somebody may help me.
Thanks in advance!
You are correct, they are executing sequentially in your example.
p.join() causes the current thread to block until that process has finished executing. You'll either want to join your processes individually outside of your for loop (e.g., by storing them in a list and then iterating over it), or use something like multiprocessing.Pool and apply_async with a callback. The callback approach will also let you add each value to your result directly rather than keeping the objects around.
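A minimal sketch of the first option, keeping the question's f(q, i) and queue but starting all the processes before waiting on any of them:
from multiprocessing import Process, Queue
import numpy as np

def f(q, i):
    q.put(np.zeros((4, 4)))

if __name__ == '__main__':
    q = Queue()
    processes = [Process(target=f, args=(q, i)) for i in range(30)]
    for p in processes:
        p.start()              # start everything first so the work overlaps

    result = np.zeros((4, 4))
    for _ in processes:
        result += q.get()      # drain one result per process (blocks until ready)

    for p in processes:
        p.join()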
The Pool-with-callback version, for example, would look like this:
from multiprocessing import Pool
import numpy as np

def f(i):
    return i * np.identity(4)

if __name__ == '__main__':
    p = Pool(5)
    result = np.zeros((4, 4))

    def adder(value):
        global result
        result += value

    for i in range(30):
        p.apply_async(f, args=(i,), callback=adder)
    p.close()
    p.join()

    print(result)
Closing and then joining the pool at the end ensures that the pool's processes have completed and the result object is finished being computed. You could also investigate using Pool.imap as a solution to your problem. That particular solution would look something like this:
if __name__ == '__main__':
    p = Pool(5)
    result = np.zeros((4, 4))

    im = p.imap_unordered(f, range(30), chunksize=5)
    for x in im:
        result += x

    print(result)
This is cleaner for your specific situation, but may not be for whatever you are ultimately trying to do.
As to storing all of your varied results: if I understand your question, you can just add each one into a single result in the callback method (as above), or item-at-a-time using imap/imap_unordered (which still stores the results internally, but you clear them as you go). Then each value doesn't need to be stored for longer than it takes to add it to the result.
