How to run multiple functions sequentially with multiprocessing in Python?

I'm trying to run multiple functions with multiprocessing and running into a bit of a wall. I want to run an initial function to completion on all processes/inputs and then run 2 or 3 other functions in parallel on the output of the first function. I've already got my search function; the code below is only there for the sake of explanation.
I'm not sure how to continue the code from here. I've put my initial attempt below. I want all instances of process1 to finish and then process2 and process3 to start in parallel.
Code is something like:
from multiprocessing import Pool
import multiprocessing

def init(*args):
    global working_dir
    [working_dir] = args

def process1(InFile):
    python.DoStuffWith.InFile
    Output.save.in(working_dir)

def process2(queue):
    inputfiles2 = []
    python.searchfunction.appendOutputof.process1.to.inputfiles2
    python.DoStuffWith.process1.Output
    python.Output

def process3(queue):
    inputfiles2 = []
    python.searchfunction.appendOutputof.process1.to.inputfiles2
    python.DoStuffWith.process1.Output
    python.Output

def MCprocess():
    working_dir = input("enter input: ")
    inputfiles1 = []
    python.searchfunction.appendfilesin.working_dir.to.inputfiles1
    with Pool(initializer=init, initargs=[working_dir], processes=16) as pool:
        pool.map(process1, inputfiles1)
        pool.close()

    # Edited code
    queue = multiprocessing.Queue()
    queue.put(working_dir)
    queue.put(working_dir)
    ProcessTwo = multiprocessing.Process(target=process2, args=(queue,))
    ProcessThree = multiprocessing.Process(target=process3, args=(queue,))
    ProcessTwo.start()
    ProcessThree.start()

    # OLD CODE
    # with Pool(initializer=init, initargs=[working_dir], processes=16) as pool:
    #     pool.map_async(process2)
    #     pool.map_async(process3)

if __name__ == '__main__':
    MCprocess()

Your best bet is to use an Event. The first process calls event.set() when it is done, to signal that the event has happened. The waiting processes use event.wait() or one of its variants to block until the event has been set.
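A minimal, self-contained sketch of that approach (the file names, the working directory and the bodies of the three stages are placeholders standing in for the question's pseudocode):

import multiprocessing
from multiprocessing import Pool

def process1(infile):
    # placeholder for the real first-stage work
    print('stage 1 processing', infile)

def process2(event, working_dir):
    event.wait()                         # block until every process1 call has finished
    print('stage 2 working on the output saved in', working_dir)

def process3(event, working_dir):
    event.wait()
    print('stage 3 working on the output saved in', working_dir)

if __name__ == '__main__':
    working_dir = '/tmp/output'          # stand-in for input("enter input: ")
    inputfiles1 = ['a.txt', 'b.txt', 'c.txt']

    event = multiprocessing.Event()
    p2 = multiprocessing.Process(target=process2, args=(event, working_dir))
    p3 = multiprocessing.Process(target=process3, args=(event, working_dir))
    p2.start()
    p3.start()

    with Pool(processes=4) as pool:
        pool.map(process1, inputfiles1)  # returns only when all of stage 1 is done

    event.set()                          # wake process2 and process3
    p2.join()
    p3.join()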

Related

How can you code a nested concurrency in python?

My code has the following scheme:
class A():
    def evaluate(self):
        b = B()
        for i in range(30):
            b.run()

class B():
    def run(self):
        pass

if __name__ == '__main__':
    a = A()
    for i in range(10):
        a.evaluate()
And I want to have two levels of concurrency: the first one on the evaluate method and the second one on the run method (nested concurrency). The question is how to introduce this concurrency using the Pool class of the multiprocessing module. Should I explicitly pass the number of cores? The solution should not create more processes than multiprocessing.cpu_count().
Note: assume that the number of cores is greater than 10.
Edit:
I have seen a lot of comments saying that Python does not have true concurrency because of the GIL. This is true for Python multi-threading, but for multiprocessing it is not quite correct (look here); I have also timed it, as this article did, and the results show that it can run faster than sequential execution.
Your comment touches on a possible solution. In order to have "nested" concurrency you could have two separate pools. This gives you a "flat" program structure instead of a nested one. Additionally, it decouples A from B: A now knows nothing about B, it just publishes to a generic queue. The example below uses a single process per worker to illustrate wiring up concurrent workers communicating across a queue, but each one could easily be replaced with a pool:
import multiprocessing as mp

class A():
    def __init__(self, in_q, out_q):
        self.in_q = in_q
        self.out_q = out_q

    def evaluate(self):
        """
        Reads from the input queue, does work and publishes the output
        """
        while True:
            job = self.in_q.get()
            if job is None:              # sentinel: pass the shutdown on and stop
                self.out_q.put(None)
                break
            for i in range(30):
                self.out_q.put(i)

class B():
    def __init__(self, in_q):
        self.in_q = in_q

    def run(self):
        """
        Loop over the queue and process items, optionally configure
        with another queue to "sink" the processing pipeline
        """
        while True:
            job = self.in_q.get()
            if job is None:              # sentinel: stop
                break

if __name__ == '__main__':
    # create the queues to wire up our concurrent worker pools
    A_q = mp.Queue()
    AB_q = mp.Queue()

    a = A(in_q=A_q, out_q=AB_q)
    b = B(in_q=AB_q)

    p = mp.Process(target=a.evaluate)
    p.start()

    p2 = mp.Process(target=b.run)
    p2.start()

    for i in range(10):
        A_q.put(i)
    A_q.put(None)                        # shut the pipeline down

    p.join()
    p2.join()
This is a common pattern in golang.
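If you would rather use Pool objects directly, one way (my sketch, not part of the answer above) to respect the cpu_count() limit from the question is to split the worker budget between two pools, one per level; evaluate and run below are flattened stand-ins for A.evaluate and B.run:

import multiprocessing as mp

def evaluate(i):
    # stand-in for A.evaluate: produce the 30 work items for the second level
    return [(i, j) for j in range(30)]

def run(item):
    # stand-in for B.run: process a single item
    pass

if __name__ == '__main__':
    n = mp.cpu_count()
    outer = mp.Pool(processes=max(1, n // 2))
    inner = mp.Pool(processes=max(1, n - n // 2))
    # the outer pool keeps evaluating in the background while the inner
    # pool drains each batch, so at most cpu_count() workers exist in total
    for batch in outer.imap_unordered(evaluate, range(10)):
        inner.map(run, batch)
    outer.close()
    inner.close()
    outer.join()
    inner.join()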

Force Multiprocessing Pool to iterate over argument

I'm using a multiprocessing Pool to run a function for multiple arguments over and over. I use a list of jobs that is filled by another thread and a job_handler function that handles each job. My problem is that when the list becomes empty the Pool ends the function. I want to keep the pool alive and have it wait for the list to fill again. Actually, there are two scenarios to solve this.
1. Use one pool, but it ends after the list becomes empty:
from multiprocessing import Pool
from threading import Thread
from time import sleep

def job_handler(i):
    print("Doing job:", i)
    sleep(0.5)

def job_adder():
    i = 0
    while True:
        jobs.append(i)
        i += 1
        sleep(0.1)

if __name__ == "__main__":
    pool = Pool(4)
    jobs = []
    thr = Thread(target=job_adder)
    thr.start()
    # wait for job_adder to add to list
    sleep(1)
    pool.map_async(job_handler, jobs)
    while True:
        pass
2. Multiple map_async:

from multiprocessing import Pool
from threading import Thread
from time import sleep

def job_handler(i):
    print("Doing job:", i)
    sleep(0.5)

def job_adder():
    i = 0
    while True:
        jobs.append(i)
        i += 1
        sleep(0.1)

if __name__ == "__main__":
    pool = Pool(4)
    jobs = []
    thr = Thread(target=job_adder)
    thr.start()
    while True:
        for job in jobs:
            pool1 = pool.map_async(job_handler, (job,))
            jobs.remove(job)
What is the difference between the two? I think the first option would be nicer because the map itself handles the iteration. My aim is to get better performance while handling each job separately.
The need to “slow down” a Pool comes up in a number of situations. This case is easier than some: feed the pool from a thread-safe queue through a lazy iterator.

import queue

q = queue.Queue()
m = pool.imap(job_handler, iter(q.get, None))

You can also use imap_unordered; None is the sentinel that ends the iteration (and with it the pool's stream of tasks). The Pool has to use a background thread to collect the tasks (since imap and imap_unordered are lazier than map()), and that thread will block on q as needed.
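Put together with the job_handler/job_adder shape from the question, the whole pattern might look like this (a sketch under my assumptions, not the original answer's code; the producer here puts a finite number of jobs and then the sentinel):

from multiprocessing import Pool
from threading import Thread
from time import sleep
import queue

def job_handler(i):
    print("Doing job:", i)
    sleep(0.5)

def job_adder(q, n=20):
    for i in range(n):
        q.put(i)
        sleep(0.1)
    q.put(None)                  # sentinel: no more jobs

if __name__ == "__main__":
    q = queue.Queue()
    Thread(target=job_adder, args=(q,)).start()
    with Pool(4) as pool:
        # iter(q.get, None) yields jobs as they arrive and stops at the sentinel
        for _ in pool.imap_unordered(job_handler, iter(q.get, None)):
            pass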

python - multiprocessing with queue

Here is my code below. I put strings into a queue and hope dowork2 will do some work and return characters in shared_queue,
but I always get nothing at while not shared_queue.empty().
Please give me some pointers, thanks.
import time
import multiprocessing as mp

class Test(mp.Process):
    def __init__(self, **kwargs):
        mp.Process.__init__(self)
        self.daemon = False
        print('dosomething')

    def run(self):
        manager = mp.Manager()
        queue = manager.Queue()
        shared_queue = manager.Queue()
        # shared_list = manager.list()
        pool = mp.Pool()
        results = []
        results.append(pool.apply_async(self.dowork2, (queue, shared_queue)))
        while True:
            time.sleep(0.2)
            t = time.time()
            queue.put('abc')
            queue.put('def')
            l = ''
            while not shared_queue.empty():
                l = l + shared_queue.get()
            print(l)
            print('%.4f' % (time.time() - t))
        pool.close()
        pool.join()

    def dowork2(queue, shared_queue):
        while True:
            path = queue.get()
            shared_queue.put(path[-1:])

if __name__ == '__main__':
    t = Test()
    t.start()
    # t.join()
    # t.run()
I managed to get it to work by moving your dowork2 outside the class. If you declare dowork2 as a function before the Test class and call it as
results.append(pool.apply_async(dowork2, (queue, shared_queue)))
it works as expected. I am not 100% sure, but it probably goes wrong because your Test class is already subclassing Process. When your pool creates a subprocess and initialises the same class inside it, something gets overridden somewhere.
Overall, I wonder if Pool is really what you want to use here. Your worker seems to be in an infinite loop, indicating that you do not expect a return value from the worker, only the results in the shared queue. If that is the case, you can remove the Pool.
I also managed to get it to work, keeping your worker function inside the class, when I scrapped the Pool and replaced it with another subprocess:
foo = mp.Process(group=None, target=self.dowork2, args=(queue, shared_queue))
foo.start()
# results.append(pool.apply_async(Test.dowork2, (queue, shared_queue)))
while True:
    ....
(you need to add self to your worker, though, or declare it as a static method:)
def dowork2(self, queue, shared_queue):
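For completeness, here is a self-contained sketch of that second variant with dowork2 declared as a static method; the finite loop and the daemon flag are my additions so the example terminates, whereas the original loops forever:

import time
import multiprocessing as mp

class Test(mp.Process):
    def run(self):
        manager = mp.Manager()
        queue = manager.Queue()
        shared_queue = manager.Queue()
        worker = mp.Process(target=Test.dowork2, args=(queue, shared_queue))
        worker.daemon = True                 # dies together with this process
        worker.start()
        for _ in range(5):                   # finite loop so the sketch terminates
            t = time.time()
            queue.put('abc')
            queue.put('def')
            time.sleep(0.2)
            l = ''
            while not shared_queue.empty():
                l += shared_queue.get()
            print(l, '%.4f' % (time.time() - t))

    @staticmethod
    def dowork2(queue, shared_queue):
        while True:
            path = queue.get()
            shared_queue.put(path[-1:])

if __name__ == '__main__':
    t = Test()
    t.start()
    t.join()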

AssertionError: can only start a process object created by current process

Currently I have 3 processes A, B, C created under the main process. However, I would like to start B and C from process A. Is that possible?
process.py
from multiprocessing import Process
import time

procs = {}

def test():
    print(procs)
    procs['B'].start()
    procs['C'].start()
    time.sleep(8)
    procs['B'].terminate()
    procs['C'].terminate()
    procs['B'].join()
    procs['C'].join()

def B():
    while True:
        print('+'*10)
        time.sleep(1)

def C():
    while True:
        print('-'*10)
        time.sleep(1)

procs['A'] = Process(target = test)
procs['B'] = Process(target = B)
procs['C'] = Process(target = C)
main.py
from process import *

print(procs)
procs['A'].start()
procs['A'].join()
And I got this error:
AssertionError: can only start a process object created by current process
Is there any alternative way to start processes B and C from A? Or to let A send a signal asking the master process to start B and C?
I would recommend using Event objects to do the synchronization. They let you trigger actions across processes. For instance:
from multiprocessing import Process, Event
import time

procs = {}

def test():
    print(procs)
    # Will let the main process know that it needs
    # to start the subprocesses
    procs['B'][1].set()
    procs['C'][1].set()
    time.sleep(3)
    # This will trigger the shutdown of the subprocess
    # This is cleaner than using terminate as it allows
    # you to clean up the processes if needed.
    procs['B'][1].set()
    procs['C'][1].set()

def B():
    # Event will be set once again when this process
    # needs to finish
    event = procs["B"][1]
    event.clear()
    while not event.is_set():
        print('+' * 10)
        time.sleep(1)

def C():
    # Event will be set once again when this process
    # needs to finish
    event = procs["C"][1]
    event.clear()
    while not event.is_set():
        print('-' * 10)
        time.sleep(1)

if __name__ == '__main__':
    procs['A'] = (Process(target=test), None)
    procs['B'] = (Process(target=B), Event())
    procs['C'] = (Process(target=C), Event())

    procs['A'][0].start()

    # Wait for events to be set before starting the subprocess
    procs['B'][1].wait()
    procs['B'][0].start()
    procs['C'][1].wait()
    procs['C'][0].start()

    # Join all the subprocess in the process that created them.
    procs['A'][0].join()
    procs['B'][0].join()
    procs['C'][0].join()
Note that this code is not really clean: only one event is needed in this case, but you should get the main idea.
Also, process A is not really needed anymore; you could consider using callbacks instead. See for instance the concurrent.futures module if you want to chain some async actions.
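As a rough sketch of that last suggestion (the task names and the Event-based wait are my own illustration, not part of the original answer), a done-callback on a future can kick off the follow-up work without a dedicated coordinator process:

import threading
from concurrent.futures import ProcessPoolExecutor

def task_a():
    print('+' * 10)
    return 'output of A'

def task_b(prev):
    print('-' * 10, 'started after:', prev)

if __name__ == '__main__':
    executor = ProcessPoolExecutor()
    chain_done = threading.Event()

    def start_b(fut_a):
        # runs in the executor's callback thread once task_a has finished
        fut_b = executor.submit(task_b, fut_a.result())
        fut_b.add_done_callback(lambda _: chain_done.set())

    executor.submit(task_a).add_done_callback(start_b)
    chain_done.wait()            # block until the whole A -> B chain has run
    executor.shutdown()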

python multithreading wait till all threads finished

This may have been asked in a similar context, but I was unable to find an answer after about 20 minutes of searching, so I will ask.
I have written a Python script (let's say scriptA.py) and a script (let's say scriptB.py).
In scriptB I want to call scriptA multiple times with different arguments. Each run takes about an hour (it's a huge script, does lots of stuff, don't worry about it), and I want to be able to run scriptA with all the different arguments simultaneously, but I need to wait till ALL of them are done before continuing; my code:
import subprocess

# setup
do_setup()

# run scriptA
subprocess.call(scriptA + argumentsA)
subprocess.call(scriptA + argumentsB)
subprocess.call(scriptA + argumentsC)

# finish
do_finish()
I want to run all the subprocess.call() calls at the same time and then wait till they are all done. How should I do this?
I tried to use threading like the example here:
from threading import Thread
import subprocess

def call_script(args):
    subprocess.call(args)

# run scriptA
t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))
t1.start()
t2.start()
t3.start()
But I do not think this is right.
How do I know they have all finished running before going to my do_finish()?
Put the threads in a list and then use the join method:
threads = []

t = Thread(...)
threads.append(t)

...repeat as often as necessary...

# Start all threads
for x in threads:
    x.start()

# Wait for all of them to finish
for x in threads:
    x.join()
You need to use the join method of the Thread object at the end of the script.
t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))

t1.start()
t2.start()
t3.start()

t1.join()
t2.join()
t3.join()
Thus the main thread will wait till t1, t2 and t3 finish execution.
Since Python 3.2 there is a newer approach that reaches the same result and that I personally prefer to the traditional thread creation/start/join: the concurrent.futures package: https://docs.python.org/3/library/concurrent.futures.html
Using a ThreadPoolExecutor the code would be:
from concurrent.futures.thread import ThreadPoolExecutor
import time

def call_script(ordinal, arg):
    print('Thread', ordinal, 'argument:', arg)
    time.sleep(2)
    print('Thread', ordinal, 'Finished')

args = ['argumentsA', 'argumentsB', 'argumentsC']

with ThreadPoolExecutor(max_workers=2) as executor:
    ordinal = 1
    for arg in args:
        executor.submit(call_script, ordinal, arg)
        ordinal += 1

print('All tasks have been finished')
The output of the previous code is something like:
Thread 1 argument: argumentsA
Thread 2 argument: argumentsB
Thread 1 Finished
Thread 2 Finished
Thread 3 argument: argumentsC
Thread 3 Finished
All tasks have been finished
One of the advantages is that you can control the throughput by setting the maximum number of concurrent workers.
To use multiprocessing instead, you can use ProcessPoolExecutor.
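For example, the same pattern with processes instead of threads might look like this (a sketch; the argument values are carried over from the thread example):

from concurrent.futures import ProcessPoolExecutor
import time

def call_script(ordinal, arg):
    print('Process', ordinal, 'argument:', arg)
    time.sleep(2)
    print('Process', ordinal, 'Finished')

if __name__ == '__main__':
    args = ['argumentsA', 'argumentsB', 'argumentsC']
    with ProcessPoolExecutor(max_workers=2) as executor:
        for ordinal, arg in enumerate(args, start=1):
            executor.submit(call_script, ordinal, arg)
    # the with-block only exits once every submitted task has finished
    print('All tasks have been finished')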
I prefer using a list comprehension based on an input list:

inputs = [scriptA + argumentsA, scriptA + argumentsB, ...]

threads = [Thread(target=call_script, args=(i,)) for i in inputs]
[t.start() for t in threads]
[t.join() for t in threads]
You can have a class something like the one below, to which you can add any number of functions or console scripts that you want to execute in parallel, then start the execution and wait for all the jobs to complete.
from multiprocessing import Process

class ProcessParallel(object):
    """
    Runs the given functions in parallel.
    """
    def __init__(self, *jobs):
        self.jobs = jobs
        self.processes = []

    def fork_processes(self):
        """
        Creates a process object for each function delegate.
        """
        for job in self.jobs:
            proc = Process(target=job)
            self.processes.append(proc)

    def start_all(self):
        """
        Starts all the function processes together.
        """
        for proc in self.processes:
            proc.start()

    def join_all(self):
        """
        Waits until all the functions have finished executing.
        """
        for proc in self.processes:
            proc.join()

def two_sum(a=2, b=2):
    return a + b

def multiply(a=2, b=2):
    return a * b

# How to run:
if __name__ == '__main__':
    # note: two_sum and multiply can be replaced with any Python console scripts
    # that you want to run in parallel
    procs = ProcessParallel(two_sum, multiply)
    # Create all the process objects
    procs.fork_processes()
    # Start process execution
    procs.start_all()
    # Wait until all the processes have finished
    procs.join_all()
I just came across the same problem, where I needed to wait for all the threads that were created in a for loop. I tried out the following piece of code. It may not be the perfect solution, but I thought it would be a simple one to test:
import threading

for t in threading.enumerate():
    try:
        t.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            continue
        else:
            raise
From the threading module documentation
There is a “main thread” object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread.
There is the possibility that “dummy thread objects” are created. These are thread objects corresponding to “alien threads”, which are threads of control started outside the threading module, such as directly from C code. Dummy thread objects have limited functionality; they are always considered alive and daemonic, and cannot be join()ed. They are never deleted, since it is impossible to detect the termination of alien threads.
So, to catch those two cases when you are not interested in keeping a list of the threads you create:
import threading as thrd

def alter_data(data, index):
    data[index] *= 2

data = [0, 2, 6, 20]

for i, value in enumerate(data):
    thrd.Thread(target=alter_data, args=[data, i]).start()

for thread in thrd.enumerate():
    if thread.daemon:
        continue
    try:
        thread.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            # catches the main thread
            continue
        else:
            raise
Whereupon:
>>> print(data)
[0, 4, 12, 40]
Maybe something like:

for t in threading.enumerate():
    if t.daemon:
        t.join()
Using only join can result in false-positive interaction with the thread. As said in the docs:
When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call isAlive() after join() to decide whether a timeout happened – if the thread is still alive, the join() call timed out.
and an illustrative piece of code:
threads = []
for name in some_data:
    new = threading.Thread(
        target=self.some_func,
        args=(name,)
    )
    threads.append(new)
    new.start()

over_threads = iter(threads)
curr_th = next(over_threads)
while True:
    curr_th.join()
    if curr_th.is_alive():
        continue
    try:
        curr_th = next(over_threads)
    except StopIteration:
        break
