Python Multiprocessing Worker/Queue

I have a Python function that has to run 12 times in total. Currently I use Pool from the multiprocessing library to run them in parallel. Typically I run 6 at a time because the function is CPU intensive and running all 12 in parallel often causes the program to crash. When we do 6 at a time, the second set of 6 will not begin until all of the first 6 processes are finished. Ideally, we would like another one (e.g. the 7th) to kick off as soon as one from the initial batch of 6 is finished, so that 6 are running at once as long as there are more to start. Right now the code looks like this (it would be called twice, passing the first 6 elements in one list and then the second 6 in another):
from multiprocessing import Pool

def start_pool(project_list):
    pool = Pool(processes=6)
    pool.map(run_assignments_parallel, project_list[0:6])
So I have been trying to implement a worker/queue solution and have run into some issues. I have a worker function that looks like this:
def worker(work_queue, done_queue):
    try:
        for proj in iter(work_queue.get, 'STOP'):
            print proj
            run_assignments_parallel(proj)
            done_queue.put('finished ' + proj)
    except Exception, e:
        done_queue.put("%s failed on %s with: %s" % (current_process().name, proj, e.message))
    return True
And the code to call the worker function is as follows:
workers = 6
work_queue = Queue()
done_queue = Queue()
processes = []

for project in project_list:
    print project
    work_queue.put(project)

for w in xrange(workers):
    p = Process(target=worker, args=(work_queue, done_queue))
    p.start()
    processes.append(p)
    work_queue.put('STOP')

for p in processes:
    p.join()

done_queue.put('STOP')

for status in iter(done_queue.get, 'STOP'):
    print status
project_list is just a list of paths for the 12 projects that need to be run through the function 'run_assignments_parallel'.
The way this is written now, the function is getting called more than once for the same project and I can't really tell what is going on. This code is based on an example I found, and I am pretty sure the looping structure is messed up. Any help would be great, and I apologize for my ignorance on the matter. Thanks!

Ideally, we would like another one (e.g. the 7th) to kick off as soon as one from the initial batch of 6 is finished, so that 6 are running at once as long as there are more to start.
All you need to change is to pass all 12 input parameters instead of 6:
from multiprocessing import Pool
pool = Pool(processes=6) # run no more than 6 at a time
pool.map(run_assignments_parallel, project_list) # pass full list (12 items)
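In other words, Pool already does the scheduling you want: each of the 6 workers pulls the next project off the internal task queue the moment it finishes its current one. Here is a minimal sketch of the full flow, with a placeholder run_assignments_parallel and made-up project names standing in for your real ones; imap_unordered additionally yields each status as soon as it is ready rather than in submission order:
from multiprocessing import Pool

def run_assignments_parallel(proj):
    # Placeholder for the real, CPU-intensive function.
    return 'finished ' + proj

if __name__ == '__main__':
    project_list = ['project_%02d' % i for i in range(12)]  # stand-ins for your 12 paths
    pool = Pool(processes=6)  # never more than 6 projects running at once
    for status in pool.imap_unordered(run_assignments_parallel, project_list):
        print(status)  # printed as each project finishes
    pool.close()
    pool.join()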

You can use the MPipe module.
Create a 6-worker, single-stage pipeline and feed in all your projects as tasks. Then just read the results (in your case, statuses) off the end.
from mpipe import Pipeline, OrderedStage
...
pipe = Pipeline(OrderedStage(run_assignments_parallel), 6)
for project in project_list:
pipe.put(project)
pipe.put(None) # Signal end of input.
for status in pipe.results():
print(status)

Related

Modify global variable with Pool in multiprocessing module

I'm not familiar with the multiprocessing module. I am trying to verify that variables in different processes are independent. After the test, I find that different processes apparently "share" variables. That happens when processes have the same pid. I am not sure if there is some relationship?
Environment: Windows 10; Python 3.7
# -*- coding: utf-8 -*-
import os
from multiprocessing import Pool

p = 0

def Child_process(id_number):
    global p
    print('Task start: %s(%s)' % (id_number, os.getpid()))
    print('p = %d' % p)
    p = p + 1
    print('Task {} end'.format(id_number))

if __name__ == '__main__':
    p = Pool(4)
    p.map(Child_process, range(5))
    p.close()
    p.join()
The result is:
Task start: 0(7668)
p = 0
Task start: 1(10384)
Task 0 end
p = 0
Task start: 2(7668)
p = 1
Task 1 end
Task 2 end
Task start: 3(7668)
Task start: 4(10384)
p = 1
Task 4 end
p = 2
Task 3 end
I think the p should always be 0, but it increases when different processes have the same pid?
By definition, a thread/process pool will re-use the same thread/process. This lets you set up resources when the thread/process starts so that each task won't have to initialize them each time. This includes global variables, open files, sockets, etc. You can do the one-time initialization by passing an initializer function to the pool. So if you set or increment the variable p, it will remain set throughout the various tasks run by that process. If you want the variable to always start at 0 for each task, you'll need to set it to 0 at the start of each task.
This note is in the multiprocessing.pool.Pool class:
Note: Worker processes within a Pool typically live for the complete duration of the Pool’s work queue. A frequent pattern found in other systems (such as Apache, mod_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before being exiting, being cleaned up and a new process spawned to replace the old one. The maxtasksperchild argument to the Pool exposes this ability to the end user.
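To make both remedies concrete, here is a minimal sketch (the names init_worker and task are illustrative, not from the original code): the initializer runs once per worker process, and maxtasksperchild=1 makes the pool replace a worker after every task, so each task sees a fresh p:
import os
from multiprocessing import Pool

p = 0

def init_worker():
    # Runs once in each worker process, right after it starts.
    global p
    p = 0

def task(id_number):
    global p
    print('Task %s in pid %s sees p = %d' % (id_number, os.getpid(), p))
    p += 1  # visible only inside this worker process

if __name__ == '__main__':
    # maxtasksperchild=1 means each worker exits after one task, so a new
    # process (with p == 0 again) is spawned for every item in range(5).
    with Pool(4, initializer=init_worker, maxtasksperchild=1) as pool:
        pool.map(task, range(5))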

Running Python on multiple cores

I have created a (rather large) program that takes quite a long time to finish, and I started looking into ways to speed up the program.
I found that if I open task manager while the program is running only one core is being used.
After some research, I found this website:
Why does multiprocessing use only a single core after I import numpy? which gives a solution of os.system("taskset -p 0xff %d" % os.getpid()),
however this doesn't work for me, and my program continues to run on a single core.
I then found this:
is python capable of running on multiple cores?,
which pointed towards using multiprocessing.
So after looking into multiprocessing, I came across this documentation on how to use it: https://docs.python.org/3/library/multiprocessing.html#examples
I tried the code:
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()

a = input("Finished")
After running the code (not in IDLE), it said this:
Finished
hello bob
Finished
Note: after it said Finished the first time I pressed enter
So after this I am now even more confused and I have two questions
First: it still doesn't run on multiple cores (I have an 8-core Intel i7).
Second: why does it print "Finished" before it has even run the code in the if statement (and it's not even finished yet!)?
To answer your second question first, "Finished" is printed to the terminal because a = input("Finished") is outside of your if __name__ == '__main__': code block. It is module-level code, so it runs every time the module is executed or imported; on Windows the spawned child process imports your module, which is why the prompt appears before f has even run.
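A minimal fix for the double prompt, assuming you only want it in the parent process, is to move the input call inside the guard:
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
    # Inside the guard, so the spawned child does not run it when it imports the module.
    a = input("Finished")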
To answer the first question: you only created one process, which you run and then wait on until it completes before continuing. This gives you zero benefit from multiprocessing and incurs the overhead of creating the new process.
Because you want to create several processes, you need to create a pool via a collection of some sort (e.g. a python list) and then start all of the processes.
In practice, you need to be concerned with more than the number of processors (such as the amount of available memory, the ability to restart workers that crash, etc.). However, here is a simple example that completes your task above.
import datetime as dt
from multiprocessing import Process, current_process
import sys

def f(name):
    print('{}: hello {} from {}'.format(
        dt.datetime.now(), name, current_process().name))
    sys.stdout.flush()

if __name__ == '__main__':
    worker_count = 8
    worker_pool = []
    for _ in range(worker_count):
        p = Process(target=f, args=('bob',))
        p.start()
        worker_pool.append(p)
    for p in worker_pool:
        p.join()  # Wait for all of the workers to finish.

    # Allow time to view results before program terminates.
    a = input("Finished")  # raw_input(...) in Python 2.
Also note that if you join workers immediately after starting them, you are waiting for each worker to complete its task before starting the next worker. This is generally undesirable unless the ordering of the tasks must be sequential.
Typically Wrong
worker_1.start()
worker_1.join()
worker_2.start() # Must wait for worker_1 to complete before starting worker_2.
worker_2.join()
Usually Desired
worker_1.start()
worker_2.start() # Start all workers.
worker_1.join()
worker_2.join() # Wait for all workers to finish.
For more information, please refer to the following links:
https://docs.python.org/3/library/multiprocessing.html
Dead simple example of using Multiprocessing Queue, Pool and Locking
https://pymotw.com/2/multiprocessing/basics.html
https://pymotw.com/2/multiprocessing/communication.html
https://pymotw.com/2/multiprocessing/mapreduce.html

Control Number of Processes in Python using multiprocessing

I would like to control the number of Processes spawned while using the multiprocessing package.
Say I only want three processes active at the same time. The only way I know how to do this is:
import multiprocessing
import Queue

def worker(arg):
    ## Do stuff
    return returnvalue

argument = [1, 2, 3, 4, 5, 6]
aliveprocesses = 0
jobs = Queue.Queue()

for arg in argument:
    while jobs.qsize() > 2:
        jobs.get().join()
    p = multiprocessing.Process(target=worker, args=(arg,))
    jobs.put(p)
    p.start()
Basically I only know how to monitor one process at a time using the Process.join() function. I monitor the oldest process until it is done and then create a new process. For my program the oldest process should finish before the others, on average. But who knows? Maybe another process finishes first and I would have no way of knowing.
The only alternative I can think of is something like this:
import multiprocessing
import time

def worker(arg):
    ## Do stuff
    return returnvalue

argument = [1, 2, 3, 4, 5, 6]
aliveprocesses = 0
jobs = set()

for arg in argument:
    while aliveprocesses > 2:
        for j in jobs:
            if not j.is_alive():
                aliveprocesses -= 1
                break
        time.sleep(1)
    p = multiprocessing.Process(target=worker, args=(arg,))
    jobs.add(p)
    p.start()
    aliveprocesses += 1
In the above function you check all of the processes to see if they are still alive. If they are all still alive, you sleep for a bit and then check again until there is a dead process, after which you spawn a new process. The problem here is that, from what I understand, the time.sleep() function is not a particularly efficient way to wait for a process to end.
Ideally I would like a function "superjoin()" that works like Process.join(), only it takes a set of Process objects, and when any one Process within the set returns, superjoin() returns. And superjoin() would not itself use the time.sleep() function, i.e. it's not just "passing the buck".
Since you seem to have a single (parallel) task, instead of managing processes individually, you should use the higher-level multiprocessing.Pool, which makes managing the number of processes easier.
You can't join a pool, but you have blocking calls (such as Pool.map) that perform this kind of task.
If you need finer-grained control, you may want to adapt Pool's source code.
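For the example in the question, a minimal sketch might look like the following (worker and the argument list stand in for your real code). Pool(processes=3) guarantees that at most three worker processes are ever alive, and pool.map blocks until every argument has been processed, which is effectively the "superjoin" behaviour you describe:
import multiprocessing

def worker(arg):
    ## Do stuff
    return arg * 2  # stand-in for the real return value

if __name__ == '__main__':
    arguments = [1, 2, 3, 4, 5, 6]
    pool = multiprocessing.Pool(processes=3)  # never more than 3 workers at once
    results = pool.map(worker, arguments)     # blocks until all 6 are done
    pool.close()
    pool.join()
    print(results)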

Multiprocessing with python3 only runs once

I have a problem running multiple processes in Python 3.
My program does the following:
1. Takes entries from an sqlite database and passes them to an input_queue
2. Creates multiple processes that take items off the input_queue, run them through a function and output the result to the output queue.
3. Creates a thread that takes items off the output_queue and prints them (this thread is obviously started before the first 2 steps)
My problem is that currently the 'function' in step 2 is only run as many times as the number of processes set, so for example if you set the number of processes to 8, it only runs 8 times then stops. I assumed it would keep running until it took all items off the input_queue.
Do I need to rewrite the function that takes the entries out of the database (step 1) into another process and then pass its output queue as an input queue for step 2?
Edit:
Here is an example of the code. I used a list of numbers as a substitute for the database entries, as it still behaves the same way. I have 300 items in the list and I would like it to process all 300, but at the moment it just processes 10 (the number of processes I have assigned).
#!/usr/bin/python3
from multiprocessing import Process, Queue
import multiprocessing
from threading import Thread

## This is the class that would be passed to the multi_processing function
class Processor:
    def __init__(self, out_queue):
        self.out_queue = out_queue

    def __call__(self, in_queue):
        data_entry = in_queue.get()
        result = data_entry * 2
        self.out_queue.put(result)

# Performs the multiprocessing
def perform_distributed_processing(dbList, threads, processor_factory, output_queue):
    input_queue = Queue()

    # Create the Data processors.
    for i in range(threads):
        processor = processor_factory(output_queue)
        data_proc = Process(target=processor,
                            args=(input_queue,))
        data_proc.start()

    # Push entries to the queue.
    for entry in dbList:
        input_queue.put(entry)

    # Push stop markers to the queue, one for each thread.
    for i in range(threads):
        input_queue.put(None)

    data_proc.join()
    output_queue.put(None)

if __name__ == '__main__':
    output_results = Queue()

    def output_results_reader(queue):
        while True:
            item = queue.get()
            if item is None:
                break
            print(item)

    # Establish results collecting thread.
    results_process = Thread(target=output_results_reader, args=(output_results,))
    results_process.start()

    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]

    # Perform multi processing
    perform_distributed_processing(dbList, 10, Processor, output_results)

    # Wait for it all to finish.
    results_process.join()
# Wait for it all to finish.
results_process.join()
A collection of processes that service an input queue and write to an output queue is pretty much the definition of a process pool.
If you want to know how to build one from scratch, the best way to learn is to look at the source code for multiprocessing.Pool, which is pretty simple Python and very nicely written. But, as you might expect, you can just use multiprocessing.Pool instead of re-implementing it. The examples in the docs are very nice.
But really, you could make this even simpler by using an executor instead of a pool. It's hard to explain the difference (again, read the docs for both modules), but basically, a future is a "smart" result object, which means instead of a pool with a variety of different ways to run jobs and get results, you just need a dumb thing that doesn't know how to do anything but return futures. (Of course in the most trivial cases, the code looks almost identical either way…)
from concurrent.futures import ProcessPoolExecutor

def Processor(data_entry):
    return data_entry * 2

def perform_distributed_processing(dbList, threads, processor_factory):
    # max_workers caps the number of worker processes.
    with ProcessPoolExecutor(max_workers=threads) as executor:
        yield from executor.map(processor_factory, dbList)

if __name__ == '__main__':
    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]
    for result in perform_distributed_processing(dbList, 8, Processor):
        print(result)
Or, if you want to handle them as they come instead of in order:
from concurrent.futures import Future, ProcessPoolExecutor, as_completed

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        fs = (executor.submit(processor_factory, db) for db in dbList)
        yield from map(Future.result, as_completed(fs))
Notice that I also replaced your in-process queue and thread, because it wasn't doing anything but providing a way to interleave "wait for the next result" and "process the most recent result", and yield (or yield from, in this case) does that without all the complexity, overhead, and potential for getting things wrong.
Don't try to rewrite the whole multiprocessing library again. I think you can use any of the multiprocessing.Pool methods depending on your needs; if this is a batch job you can even use the synchronous multiprocessing.Pool.map(). Only instead of pushing to an input queue, you write a generator (or pass any iterable) that yields the input to the workers.
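As a minimal sketch of that suggestion (read_entries is a made-up stand-in for whatever pulls rows out of the sqlite database), a generator can feed Pool.imap directly, and results stream back as the workers produce them:
from multiprocessing import Pool

def process_entry(data_entry):
    return data_entry * 2

def read_entries():
    # Stand-in for iterating over the rows of the sqlite database.
    for i in range(300):
        yield i

if __name__ == '__main__':
    with Pool(processes=8) as pool:
        # imap consumes the generator lazily and yields results in order.
        for result in pool.imap(process_entry, read_entries()):
            print(result)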

How to spawn parallel child processes on a multi-processor system?

I have a Python script that I want to use as a controller to another Python script. I have a server with 64 processors, so want to spawn up to 64 child processes of this second Python script. The child script is called:
$ python create_graphs.py --name=NAME
where NAME is something like XYZ, ABC, NYU etc.
In my parent controller script I retrieve the name variable from a list:
my_list = [ 'XYZ', 'ABC', 'NYU' ]
So my question is, what is the best way to spawn off these processes as children? I want to limit the number of children to 64 at a time, so need to track the status (if the child process has finished or not) so I can efficiently keep the whole generation running.
I looked into using the subprocess package, but rejected it because it only spawns one child at a time. I finally found the multiprocessing package, but I admit to being overwhelmed by the whole threads vs. subprocesses documentation.
Right now, my script uses subprocess.call to only spawn one child at a time and looks like this:
#!/path/to/python
import subprocess, multiprocessing, Queue
from multiprocessing import Process

my_list = [ 'XYZ', 'ABC', 'NYU' ]

if __name__ == '__main__':
    processors = multiprocessing.cpu_count()
    for i in range(len(my_list)):
        if( i < processors ):
            cmd = ["python", "/path/to/create_graphs.py", "--name=" + my_list[i]]
            child = subprocess.call( cmd, shell=False )
I really want it to spawn up 64 children at a time. In other stackoverflow questions I saw people using Queue, but it seems like that creates a performance hit?
What you are looking for is the process pool class in multiprocessing.
import multiprocessing
import subprocess

def work(cmd):
    return subprocess.call(cmd, shell=False)

if __name__ == '__main__':
    count = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(processes=count)
    print pool.map(work, ['ls'] * count)
And here is a calculation example to make it easier to understand. The following will divide 10000 tasks among N processes, where N is the CPU count. Note that I'm passing None as the number of processes. This causes the Pool class to use cpu_count() for the number of processes (reference).
import multiprocessing
import subprocess

def calculate(value):
    return value * 10

if __name__ == '__main__':
    pool = multiprocessing.Pool(None)
    tasks = range(10000)
    results = []
    r = pool.map_async(calculate, tasks, callback=results.append)
    r.wait()  # Wait on the results
    print results
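Applied to the original problem, a minimal sketch could look like this (the script path is the hypothetical one from the question, and 64 is the requested cap). pool.map keeps up to 64 children busy and hands out the next name as soon as a slot frees up:
import multiprocessing
import subprocess

def work(name):
    cmd = ["python", "/path/to/create_graphs.py", "--name=" + name]
    return subprocess.call(cmd, shell=False)

if __name__ == '__main__':
    my_list = ['XYZ', 'ABC', 'NYU']            # the full list of names
    pool = multiprocessing.Pool(processes=64)  # at most 64 children at a time
    exit_codes = pool.map(work, my_list)
    pool.close()
    pool.join()
    print(exit_codes)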
Here is the solution I came up with, based on Nadia and Jim's comments. I am not sure if it is the best way, but it works. The original child script being called needs to be a shell script because I need to use some 3rd party apps including Matlab. So I had to take it out of Python and code it in bash.
import sys
import os
import multiprocessing
import subprocess

def work(staname):
    print 'Processing station:', staname
    print 'Parent process:', os.getppid()
    print 'Process id:', os.getpid()
    cmd = [ "/bin/bash", "/path/to/executable/create_graphs.sh", "--name=%s" % (staname) ]
    return subprocess.call(cmd, shell=False)

if __name__ == '__main__':
    my_list = [ 'XYZ', 'ABC', 'NYU' ]
    my_list.sort()
    print my_list

    # Get the number of processors available
    num_processes = multiprocessing.cpu_count()
    threads = []
    len_stas = len(my_list)
    print "+++ Number of stations to process: %s" % (len_stas)

    # run until all the threads are done, and there is no data left
    for list_item in my_list:
        # if we aren't using all the processors AND there is still data left to
        # compute, then spawn another thread
        if( len(threads) < num_processes ):
            p = multiprocessing.Process(target=work, args=[list_item])
            p.start()
            print p, p.is_alive()
            threads.append(p)
        else:
            for thread in threads:
                if not thread.is_alive():
                    threads.remove(thread)
Does this seem like a reasonable solution? I tried to use Jim's while loop format, but my script just returned nothing. I am not sure why that would be. Here is the output when I run the script with Jim's 'while' loop replacing the 'for' loop:
hostname{me}2% controller.py
['ABC', 'NYU', 'XYZ']
Number of processes: 64
+++ Number of stations to process: 3
hostname{me}3%
When I run it with the 'for' loop, I get something more meaningful:
hostname{me}6% controller.py
['ABC', 'NYU', 'XYZ']
Number of processes: 64
+++ Number of stations to process: 3
Processing station: ABC
Parent process: 1056
Process id: 1068
Processing station: NYU
Parent process: 1056
Process id: 1069
Processing station: XYZ
Parent process: 1056
Process id: 1071
hostname{me}7%
So this works, and I am happy. However, I still don't get why I can't use Jim's 'while' style loop instead of the 'for' loop I am using. Thanks for all the help - I am impressed with the breadth of knowledge at Stack Overflow.
I would definitely use multiprocessing rather than rolling my own solution using subprocess.
I don't think you need a queue unless you intend to get data out of the applications (and if you do want data, I think it may be easier to add it to a database anyway),
but try this on for size:
put the contents of your create_graphs.py script all into a function called "create_graphs"
import threading
from create_graphs import create_graphs

num_processes = 64
my_list = [ 'XYZ', 'ABC', 'NYU' ]

threads = []

# run until all the threads are done, and there is no data left
while threads or my_list:
    # if we aren't using all the processors AND there is still data left to
    # compute, then spawn another thread
    if (len(threads) < num_processes) and my_list:
        t = threading.Thread(target=create_graphs, args=[ my_list.pop() ])
        t.setDaemon(True)
        t.start()
        threads.append(t)
    # in the case that we have the maximum number of threads, check if any of them
    # are done. (also do this when we run out of data, until all the threads are done)
    else:
        for thread in threads:
            if not thread.isAlive():
                threads.remove(thread)
I know that this will result in one fewer thread than processors, which is probably good: it leaves a processor free to manage the threads, disk I/O, and other things happening on the computer. If you decide you want to use the last core, just add one to num_processes.
Edit: I think I may have misinterpreted the purpose of my_list. You do not need my_list to keep track of the threads at all (they are all referenced by the items in the threads list). But it is a fine way of feeding the workers their input, or even better: use a generator function (sketched below) ;)
The purpose of my_list and threads:
my_list holds the data that you need to process in your function.
threads is just a list of the currently running threads.
The while loop does two things: it starts new threads to process the data, and it checks whether any threads have finished running.
So as long as you have either (a) more data to process, or (b) threads that aren't finished running, you want the program to continue running. Once both lists are empty they will evaluate to False and the while loop will exit.
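To sketch the generator idea mentioned above (station_names and its contents are made up for the example), the input can be pulled lazily with next() instead of popping from a list:
import threading
from create_graphs import create_graphs

num_processes = 64

def station_names():
    # Could just as easily read names one at a time from a file or database.
    for name in ['XYZ', 'ABC', 'NYU']:
        yield name

names = station_names()
threads = []
pending = next(names, None)

# keep running while there is input left or threads are still working
while pending is not None or threads:
    if pending is not None and len(threads) < num_processes:
        t = threading.Thread(target=create_graphs, args=[pending])
        t.daemon = True
        t.start()
        threads.append(t)
        pending = next(names, None)  # fetch the next name lazily
    else:
        # drop any threads that have finished
        threads = [t for t in threads if t.is_alive()]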
