I am learning about threading using Python 3.8.2. I have one function with an infinite loop, and then two other functions that use the threading.Timer class.
def x():
    while True:
        dosomething()

def f1():
    dosomething2()
    threading.Timer(60, f1).start()

def f2():
    dosomething3()
    threading.Timer(100, f2).start()
Then I start three threads:
t1 = threading.Thread(target=x)
t2 = threading.Thread(target=f1)
t3 = threading.Thread(target=f2)
When it comes time to execute f1 or f2, I don't want x() to be executing at the same time (they might be using the same resource); I want x() to pause, let f1 or f2 finish, and then resume its infinite loop. How can I do this?
I've looked at join(), but it seems to me it would wait forever for f1() and f2(), because each of them starts a new Timer thread every time and never terminates.
Here's a possible solution. I added the functions dosomething(), dosomething2() and dosomething3() to the code to have a working example. I've also changed the timers on the threads to 6 and 10 seconds instead of 60 and 100, so we don't have to wait that long to see them work.
dosomething()
    prints 'dosomething is running' every second if no other function is running

dosomething2()
    sets dosomething2.status = 'run'
    prints 'dosomething2 is running 1st second'
    waits one second
    prints 'dosomething2 is running 2nd second'
    sets dosomething2.status = 'sleep'

dosomething3()
    sets dosomething3.status = 'run'
    prints 'dosomething3 is running 1st second'
    waits one second
    prints 'dosomething3 is running 2nd second'
    waits one second
    prints 'dosomething3 is running 3rd second'
    sets dosomething3.status = 'sleep'
The first and last lines in dosomething2() and dosomething3() act as triggers, letting our x() function know it may run only while both functions are in the 'sleep' state.
You could use global variables instead of dosomething2.status and dosomething3.status but some people recommend not using them.
Code
import time
import threading

def dosomething():
    print('dosomething is running')
    time.sleep(1)

def dosomething2():
    dosomething2.status = 'run'
    print('\tdosomething2 is running 1st second')
    time.sleep(1)
    print('\tdosomething2 is running 2nd second')
    dosomething2.status = 'sleep'

def dosomething3():
    dosomething3.status = 'run'
    print('\tdosomething3 is running 1st second')
    time.sleep(1)
    print('\tdosomething3 is running 2nd second')
    time.sleep(1)
    print('\tdosomething3 is running 3rd second')
    dosomething3.status = 'sleep'

dosomething2.status = ''
dosomething3.status = ''

def x():
    while True:
        if dosomething2.status == 'sleep' and dosomething3.status == 'sleep':
            dosomething()

def f1():
    dosomething2()
    threading.Timer(6, f1).start()

def f2():
    dosomething3()
    threading.Timer(10, f2).start()

t1 = threading.Thread(target=x)
t2 = threading.Thread(target=f1)
t3 = threading.Thread(target=f2)

t1.start()
t2.start()
t3.start()
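As a side note, the same mutual exclusion can also be expressed with a threading.Lock instead of status attributes. The sketch below is not part of the original answer (the shortened function bodies and the resource_lock name are only illustrative), but it shows the idea and avoids x() spinning while the other functions run:

import time
import threading

resource_lock = threading.Lock()   # guards the shared resource

def dosomething():
    print('dosomething is running')
    time.sleep(1)

def dosomething2():
    print('\tdosomething2 is running')
    time.sleep(2)

def dosomething3():
    print('\tdosomething3 is running')
    time.sleep(3)

def x():
    while True:
        with resource_lock:        # blocks here while f1 or f2 holds the lock
            dosomething()

def f1():
    with resource_lock:            # x() cannot run dosomething() until this block ends
        dosomething2()
    threading.Timer(6, f1).start()

def f2():
    with resource_lock:
        dosomething3()
    threading.Timer(10, f2).start()

threading.Thread(target=x).start()
threading.Thread(target=f1).start()
threading.Thread(target=f2).start()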
I am trying to use while loops inside threads for a bigger project. For simplicity I created an easier example to test it, but it doesn't work.
My goal is to control the thread for the main function: when the variable go_thread_one is switched to False, the thread should end. At the moment the second thread is not being used and only the first thread prints its text.
How can I fix this error?
Below is the simplified version of my code:
import time
from threading import Thread

go_thread_one = True

def first_thread():
    while go_thread_one:
        print('Thread 1')
        time.sleep(0.5)

def second_thread():
    print('Thread 2')

if __name__ == "__main__":
    t1 = Thread(target=first_thread())
    t2 = Thread(target=second_thread())
    t1.daemon = True
    t2.daemon = True
    t1.start()
    t2.start()

    time.sleep(2)
    go_thread_one = False
    print("end main Thread")
First of all, there is a problem in these lines:
t1 = Thread(target=first_thread())
t2 = Thread(target=second_thread())
You should pass a callable object to the Thread, but instead you call the function and pass its result. So you never even create t1; instead you go inside the first_thread function and loop there forever.
To fix this, change Thread creation to:
t1 = Thread(target=first_thread)
t2 = Thread(target=second_thread)
Next, the line
go_thread_one = False
will not have the desired effect: the main thread will finish after time.sleep(2) even without it.
To deal with it, you can add
t1.join()
t2.join()
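Putting both fixes together, the example could look like this (a sketch; only the Thread creation and the added join() calls differ from the original):

import time
from threading import Thread

go_thread_one = True

def first_thread():
    while go_thread_one:
        print('Thread 1')
        time.sleep(0.5)

def second_thread():
    print('Thread 2')

if __name__ == "__main__":
    # Pass the function objects themselves; do not call them here
    t1 = Thread(target=first_thread)
    t2 = Thread(target=second_thread)
    t1.start()
    t2.start()

    time.sleep(2)
    go_thread_one = False   # lets the while loop in first_thread() exit

    # Wait for both threads before ending the main thread
    t1.join()
    t2.join()
    print("end main Thread")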
I'm trying to run multiple functions with multiprocessing and running into a bit of a wall. I want to run an initial function to completion on all processes/inputs and then run 2 or 3 other functions in parallel on the output of the first function. I've already got my search function; the code below is only pseudocode, for the sake of explanation.
I'm not sure how to continue the code from here. I've put my initial attempt below. I want all instances of process1 to finish and then process2 and process3 to start in parallel.
Code is something like:
from multiprocessing import Pool

def init(*args):
    global working_dir
    [working_dir] = args

def process1(InFile):
    python.DoStuffWith.InFile
    Output.save.in(working_dir)

def process2(queue):
    inputfiles2 = []
    python.searchfunction.appendOutputof.process1.to.inputfiles2
    python.DoStuffWith.process1.Output
    python.Output

def process3(queue):
    inputfiles2 = []
    python.searchfunction.appendOutputof.process1.to.inputfiles2
    python.DoStuffWith.process1.Output
    python.Output

def MCprocess():
    working_dir = input("enter input: ")
    inputfiles1 = []
    python.searchfunction.appendfilesin.working_dir.to.inputfiles1
    with Pool(initializer=init, initargs=[working_dir], processes=16) as pool:
        pool.map(process1, inputfiles1)
        pool.close()

    # Edited code
    queue = multiprocessing.Queue
    queue.put(working_dir)
    queue.put(working_dir)
    ProcessTwo = multiprocessing.Process(target=process2, args=(queue,))
    ProcessThree = multiprocessing.Process(target=process3, args=(queue,))
    ProcessTwo.start()
    ProcessThree.start()

    # OLD CODE
    # with Pool(initializer=init, initargs=[working_dir], processes=16) as pool:
    #     pool.map_async(process2)
    #     pool.map_async(process3)

if __name__ == '__main__':
    MCprocess()
Your best bet is to use an Event. The first process calls event.set() when it is done, to indicate that the event has happened. The waiting processes use event.wait() (or one of its variants) to block until the event has been set.
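A minimal sketch of that idea (the stage functions and the shared done_event are illustrative placeholders, not the code from the question):

import multiprocessing

def stage1(done_event):
    # ... run the first stage to completion ...
    done_event.set()          # announce that stage 1 is finished

def stage2(done_event):
    done_event.wait()         # block until stage1 has called set()
    # ... work on the output of stage 1 ...

def stage3(done_event):
    done_event.wait()
    # ... work on the output of stage 1 ...

if __name__ == '__main__':
    done_event = multiprocessing.Event()
    procs = [multiprocessing.Process(target=f, args=(done_event,))
             for f in (stage1, stage2, stage3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()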
Currently I have 3 processes A, B and C created under the main process. However, I would like to start B and C from inside process A. Is that possible?
process.py
from multiprocessing import Process
import time

procs = {}

def test():
    print(procs)
    procs['B'].start()
    procs['C'].start()
    time.sleep(8)
    procs['B'].terminate()
    procs['C'].terminate()
    procs['B'].join()
    procs['C'].join()

def B():
    while True:
        print('+'*10)
        time.sleep(1)

def C():
    while True:
        print('-'*10)
        time.sleep(1)

procs['A'] = Process(target=test)
procs['B'] = Process(target=B)
procs['C'] = Process(target=C)
main.py
from process import *
print(procs)
procs['A'].start()
procs['A'].join()
And I got this error:
AssertionError: can only start a process object created by current process
Is there any alternative way to start processes B and C from within A? Or can A send a signal asking the master process to start B and C?
I would recommend using Event objects to do the synchronization. They let you trigger actions across processes. For instance:
from multiprocessing import Process, Event
import time

procs = {}

def test():
    print(procs)
    # Will let the main process know that it needs
    # to start the subprocesses
    procs['B'][1].set()
    procs['C'][1].set()
    time.sleep(3)
    # This will trigger the shutdown of the subprocesses.
    # This is cleaner than using terminate as it allows
    # you to clean up the processes if needed.
    procs['B'][1].set()
    procs['C'][1].set()

def B():
    # Event will be set once again when this process
    # needs to finish
    event = procs["B"][1]
    event.clear()
    while not event.is_set():
        print('+' * 10)
        time.sleep(1)

def C():
    # Event will be set once again when this process
    # needs to finish
    event = procs["C"][1]
    event.clear()
    while not event.is_set():
        print('-' * 10)
        time.sleep(1)

if __name__ == '__main__':
    procs['A'] = (Process(target=test), None)
    procs['B'] = (Process(target=B), Event())
    procs['C'] = (Process(target=C), Event())

    procs['A'][0].start()

    # Wait for the events to be set before starting the subprocesses
    procs['B'][1].wait()
    procs['B'][0].start()
    procs['C'][1].wait()
    procs['C'][0].start()

    # Join all the subprocesses in the process that created them
    procs['A'][0].join()
    procs['B'][0].join()
    procs['C'][0].join()
Note that this code is not really clean (only one event is needed in this case), but you should get the main idea.
Also, process A is not really needed any more; you could consider using callbacks instead. See for instance the concurrent.futures module if you want to chain some asynchronous actions.
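For instance, a rough sketch of that approach with concurrent.futures (task_b and task_c are placeholders for the real work; this is not meant as a drop-in replacement):

from concurrent.futures import ProcessPoolExecutor, wait
import time

def task_b():
    print('+' * 10)
    time.sleep(1)

def task_c():
    print('-' * 10)
    time.sleep(1)

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        # Each task runs in its own worker process
        futures = [executor.submit(task_b), executor.submit(task_c)]
        # Block until both are done; add_done_callback() could chain further work
        wait(futures)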
This may have been asked in a similar context but I was unable to find an answer after about 20 minutes of searching, so I will ask.
I have written a Python script (let's say scriptA.py) and another script (let's say scriptB.py).
In scriptB I want to call scriptA multiple times with different arguments. Each run takes about an hour (it's a huge script that does lots of stuff, don't worry about it), and I want to be able to run scriptA with all the different arguments simultaneously, but I need to wait until ALL of them are done before continuing. My code:
import subprocess
#setup
do_setup()
#run scriptA
subprocess.call(scriptA + argumentsA)
subprocess.call(scriptA + argumentsB)
subprocess.call(scriptA + argumentsC)
#finish
do_finish()
I want to run all the subprocess.call() calls at the same time, and then wait until they are all done. How should I do this?
I tried to use threading like the example here:
from threading import Thread
import subprocess

def call_script(args):
    subprocess.call(args)

# run scriptA
t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))
t1.start()
t2.start()
t3.start()
But I do not think this is right.
How do I know they have all finished running before going to my do_finish()?
Put the threads in a list and then use the join method:
threads = []

t = Thread(...)
threads.append(t)

# ...repeat as often as necessary...

# Start all threads
for x in threads:
    x.start()

# Wait for all of them to finish
for x in threads:
    x.join()
You need to call the join method of each Thread object at the end of the script:
t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))
t1.start()
t2.start()
t3.start()
t1.join()
t2.join()
t3.join()
Thus the main thread will wait until t1, t2 and t3 finish execution.
Since Python 3.2 there is a newer approach to achieve the same result, which I personally prefer to the traditional create/start/join: the concurrent.futures package: https://docs.python.org/3/library/concurrent.futures.html
Using a ThreadPoolExecutor the code would be:
from concurrent.futures import ThreadPoolExecutor
import time

def call_script(ordinal, arg):
    print('Thread', ordinal, 'argument:', arg)
    time.sleep(2)
    print('Thread', ordinal, 'Finished')

args = ['argumentsA', 'argumentsB', 'argumentsC']

with ThreadPoolExecutor(max_workers=2) as executor:
    ordinal = 1
    for arg in args:
        executor.submit(call_script, ordinal, arg)
        ordinal += 1

print('All tasks have been finished')
The output of the previous code is something like:
Thread 1 argument: argumentsA
Thread 2 argument: argumentsB
Thread 1 Finished
Thread 2 Finished
Thread 3 argument: argumentsC
Thread 3 Finished
All tasks have been finished
One of the advantages is that you can control the throughput by setting the maximum number of concurrent workers.
To use multiprocessing instead, you can use ProcessPoolExecutor.
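A sketch of the process-based variant, assuming the same call_script example as above (the entry-point guard is needed when spawning worker processes):

from concurrent.futures import ProcessPoolExecutor
import time

def call_script(ordinal, arg):
    print('Task', ordinal, 'argument:', arg)
    time.sleep(2)
    print('Task', ordinal, 'Finished')

if __name__ == '__main__':
    args = ['argumentsA', 'argumentsB', 'argumentsC']
    with ProcessPoolExecutor(max_workers=2) as executor:
        for ordinal, arg in enumerate(args, start=1):
            executor.submit(call_script, ordinal, arg)
    print('All tasks have been finished')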
I prefer using list comprehensions based on an input list:
inputs = [scriptA + argumentsA, scriptA + argumentsB, ...]
threads = [Thread(target=call_script, args=(i,)) for i in inputs]
[t.start() for t in threads]
[t.join() for t in threads]
You can use a class like the one below, to which you can pass any number of functions or console scripts you want to execute in parallel; it starts the execution and waits for all jobs to complete.
from multiprocessing import Process

class ProcessParallel(object):
    """
    Runs the given functions in parallel.
    """
    def __init__(self, *jobs):
        self.jobs = jobs
        self.processes = []

    def fork_processes(self):
        """
        Creates a process object for each given function delegate.
        """
        for job in self.jobs:
            proc = Process(target=job)
            self.processes.append(proc)

    def start_all(self):
        """
        Starts all the function processes together.
        """
        for proc in self.processes:
            proc.start()

    def join_all(self):
        """
        Waits until all the functions have finished executing.
        """
        for proc in self.processes:
            proc.join()

def two_sum(a=2, b=2):
    return a + b

def multiply(a=2, b=2):
    return a * b

# How to run:
if __name__ == '__main__':
    # Note: two_sum and multiply can be replaced with any Python console scripts
    # you want to run in parallel.
    procs = ProcessParallel(two_sum, multiply)
    # Create all the process objects
    procs.fork_processes()
    # Start process execution
    procs.start_all()
    # Wait until all the processes have finished
    procs.join_all()
I just came across the same problem, where I needed to wait for all the threads created in a for loop. I tried out the following piece of code. It may not be the perfect solution, but I thought it would be a simple one to test:
for t in threading.enumerate():
    try:
        t.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            continue
        else:
            raise
From the threading module documentation
There is a “main thread” object; this corresponds to the initial
thread of control in the Python program. It is not a daemon thread.
There is the possibility that “dummy thread objects” are created.
These are thread objects corresponding to “alien threads”, which are
threads of control started outside the threading module, such as
directly from C code. Dummy thread objects have limited functionality;
they are always considered alive and daemonic, and cannot be join()ed.
They are never deleted, since it is impossible to detect the
termination of alien threads.
So, to catch those two cases when you are not interested in keeping a list of the threads you create:
import threading as thrd

def alter_data(data, index):
    data[index] *= 2

data = [0, 2, 6, 20]

for i, value in enumerate(data):
    thrd.Thread(target=alter_data, args=[data, i]).start()

for thread in thrd.enumerate():
    if thread.daemon:
        continue
    try:
        thread.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            # catches the main thread
            continue
        else:
            raise
Whereupon:
>>> print(data)
[0, 4, 12, 40]
Maybe something like:
for t in threading.enumerate():
    if t.daemon:
        t.join()
Using join() alone can result in a false positive interaction with the thread. As the docs say:
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in
seconds (or fractions thereof). As join() always returns None, you
must call isAlive() after join() to decide whether a timeout happened
– if the thread is still alive, the join() call timed out.
and an illustrative piece of code:
threads = []
for name in some_data:
    new = threading.Thread(
        target=self.some_func,
        args=(name,)
    )
    threads.append(new)
    new.start()

over_threads = iter(threads)
curr_th = next(over_threads)
while True:
    curr_th.join()
    if curr_th.is_alive():
        continue
    try:
        curr_th = next(over_threads)
    except StopIteration:
        break
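Note that join() only times out if you actually pass a timeout argument; reusing the threads list from the snippet above, a small sketch of that variant:

# Poll each thread with a timeout so a stuck thread can be detected
for th in threads:
    th.join(timeout=5.0)
    if th.is_alive():
        print(th.name, 'did not finish within the timeout')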