How can I create non-blocking threads? - python

I have been trying to use Threads in python. I am working on a Pi hardware project.
Here's the problem:
When I create a thread and call it like this, the loop keeps creating new threads before the old ones have completed, which slows the program down (printing threading.active_count() shows 20+ active threads).
while True:
    t4 = Thread(target=myFunc, args=())
    t4.start()
    print("Hello World")
I need a threading process that runs the same function over and over on a SINGLE thread without affecting or delaying my main program. i.e. when a thread has completed executing the function, run it again... but my main should still be printing "Hello World" as normal.
I've found one way to stop it crashing, which is to sit and "wait" until the thread is finished, and then start again. However, this is a blocking approach, and completely defeats the purpose of threading.
while True:
    t4 = Thread(target=myFunc, args=())
    t4.start()
    t4.join()
    print("Hello World")
Any suggestions?

You can use a multiprocessing.pool.ThreadPool to manage both the starting of new threads and limiting the maximum number of them executing concurrently.
from multiprocessing.pool import ThreadPool
import random
import threading
import time

MAX_THREADS = 5  # Number of threads that can run concurrently.
print_lock = threading.Lock()  # Prevent overlapped printing from threads.

def myFunc():
    time.sleep(random.uniform(0, 1))  # Pause a variable amount of time.
    with print_lock:
        print('myFunc')

def test():
    pool = ThreadPool(processes=MAX_THREADS)
    for _ in range(100):  # Submit as many tasks as desired.
        pool.apply_async(myFunc, args=())
    pool.close()  # Done adding tasks.
    pool.join()  # Wait for all tasks to complete.
    print('done')

if __name__ == '__main__':
    test()
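On Python 3 you can get the same pattern from the standard concurrent.futures module. A minimal sketch of the equivalent, with myFunc and MAX_THREADS carried over from the code above:
import concurrent.futures
import random
import threading
import time

MAX_THREADS = 5
print_lock = threading.Lock()

def myFunc():
    time.sleep(random.uniform(0, 1))
    with print_lock:
        print('myFunc')

# max_workers caps concurrency, like ThreadPool(processes=MAX_THREADS).
with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_THREADS) as executor:
    for _ in range(100):
        executor.submit(myFunc)
# Leaving the with-block calls shutdown(wait=True), i.e. close() + join().
print('done')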

I need a threading process that runs the same function over and over on a SINGLE thread
This snippet creates a single thread that continually calls myFunc().
def threadMain():
    while True:
        myFunc()

t4 = Thread(target=threadMain, args=())
t4.start()

You can also mark the thread as a daemon with setDaemon(True) (or daemon=True) from the threading.Thread class, so it won't keep the program alive after the main thread exits. More here: https://docs.python.org/2/library/threading.html#threading.Thread.daemon
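A minimal sketch combining this with the threadMain() loop above. Keep in mind that daemon threads are killed abruptly when the interpreter exits, so only use this if myFunc() can be interrupted safely:
from threading import Thread

t4 = Thread(target=threadMain, args=())
t4.daemon = True  # same effect as t4.setDaemon(True); the thread dies with the main program
t4.start()

while True:
    print("Hello World")  # main program keeps running unblocked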

Make a delegate thread - i.e. a thread to run your other threads in sequence:
def delegate(*args):
    while True:
        t = Thread(target=myFunc, args=args)  # or just call myFunc(*args) instead of a thread
        t.start()
        t.join()

t = Thread(target=delegate, args=())
t.start()

while True:
    print("Hello world!")
Or even better, redesign your myFunc() to run its logic within a while True: ... loop and start the thread only once.
I'd also advise you to add some sort of a delay (e.g. time.sleep()) if you're not performing any work in your threads to help with context switching.
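A sketch of that redesign, assuming the body of your original myFunc() is the work you want repeated:
import time
from threading import Thread

def myFunc():
    while True:
        # ... original body of myFunc() goes here ...
        time.sleep(0.1)  # small delay so the scheduler can switch back to the main thread

t4 = Thread(target=myFunc, args=())
t4.start()  # started exactly once

while True:
    print("Hello World")  # main loop keeps printing as normal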


run threads multiple times within process python

Problem description:
I am working with a simulator to extract a dataset from it. The idea is to run multiple processes to perform various tasks, for example moving the vehicle in one process and collecting data in another. In the data-collection process, 3 threads record three different data types, and the recording has to occur periodically. Also, the data should be recorded synchronously.
Sample code is provided without details.
import threading
import multiprocessing
import time

class DataRecorder:
    def __init__(self):
        """
        some parameters
        """
        pass

    def move_vehicle(self, path):
        pass

    def record_data1(self):
        pass

    def record_data2(self):
        pass

    def record_data3(self):
        pass

    def record_data(self):
        t1 = threading.Thread(target=self.record_data1)
        t2 = threading.Thread(target=self.record_data2)
        t3 = threading.Thread(target=self.record_data3)
        threads = [t1, t2, t3]
        for thread in threads:
            thread.start()
        while True:
            for thread in threads:
                if not thread.is_alive():
                    thread.start()  # leads to "threads can only be started once"
            for thread in threads:
                if thread.is_alive():
                    thread.join()
            time.sleep(1)

    def stop_recording(self, p1):
        if p1.is_alive():
            p1.terminate()

    def move_and_record(self, path):
        P1 = multiprocessing.Process(target=self.record_data)
        P1.start()
        self.move_vehicle(path)
        self.stop_recording(P1)
The problem:
RuntimeError: threads can only be started once.
Also, inside the while loop the threads stop after the 1st iteration. I have tried it both with and without the .join() part.
I am also looking for an alternative way to solve this problem.
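One common fix, sketched here under the assumption that each recorder can loop internally: never restart the threads at all; give each one its own loop and a shared stop event (record_data1 etc. stand in for the real recording methods):
import threading
import time

stop_event = threading.Event()

def periodic(record_fn, period=1.0):
    # Each thread loops internally, so it is only ever started once.
    while not stop_event.is_set():
        record_fn()
        time.sleep(period)

threads = [threading.Thread(target=periodic, args=(fn,))
           for fn in (record_data1, record_data2, record_data3)]
for t in threads:
    t.start()

# ... move the vehicle here ...

stop_event.set()  # tell all recorders to finish their current iteration
for t in threads:
    t.join()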

Threads are not doing their job when there is a print instruction [duplicate]

I have the following script (don't refer to the contents):
import _thread

def func1(arg1, arg2):
    print("Write to CLI")

def verify_result():
    func1()

for _ in range(4):
    _thread.start_new_thread(func1, (DUT1_CLI, '0'))
verify_result()
I want to execute func1() concurrently (say, in 4 threads); in my case it includes a call that can take time to run. Then, only after the last thread has finished its work, I want to execute verify_result().
Currently, all threads finish their job, but verify_result() executes before they are done.
I have even tried the following code (of course I imported threading) inside the for loop, but that didn't do the job (don't refer to the arguments):
t = threading.Thread(target=Enable_WatchDog, args=(URL_List[x], 180, Terminal_List[x], '0'))
t.start()
t.join()
Your last threading example is close, but you have to collect the threads in a list, start them all at once, then wait for them to complete all at once. Here's a simplified example:
import threading
import time

# Lock to serialize console output
output = threading.Lock()

def threadfunc(a, b):
    for i in range(a, b):
        time.sleep(.01)  # sleep to make the "work" take longer
        with output:
            print(i)

# Collect the threads
threads = []
for i in range(10, 100, 10):
    # Create 9 threads counting 10-19, 20-29, ... 90-99.
    thread = threading.Thread(target=threadfunc, args=(i, i + 10))
    threads.append(thread)

# Start them all
for thread in threads:
    thread.start()

# Wait for all to complete
for thread in threads:
    thread.join()
Say you have a list of threads. You loop over them to start each one:
for each_thread in thread_pool:
    each_thread.start()
This starts execution of the run function within each thread. In the same way, after you have started all the threads, write another loop:
for each_thread in thread_pool:
    each_thread.join()
join() makes the caller wait for each_thread to finish before moving on. The threads still run concurrently; join() only synchronizes the point at which your main code collects their results.
In your case specifically, you can run the join() loop and then run the verify_result() function.
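Applied to the original snippet, that looks roughly like this (DUT1_CLI is the asker's own object and is simply carried over):
import threading

threads = []
for _ in range(4):
    t = threading.Thread(target=func1, args=(DUT1_CLI, '0'))
    threads.append(t)
    t.start()

for t in threads:
    t.join()  # block until every worker has finished

verify_result()  # safe now: all 4 threads are done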

Python threading: wait for thread to stop then execute function

I'm trying to run a function after my thread has completed but the function is not called. Code structure:
class MyClass:  # class name omitted in the original
    def functiontocall():
        # uses data calculated in the thread for plotting;
        # only works when the thread is complete
        do something with self.A

    def watchthread():
        thread()
        functiontocall()
        # since this function depends on variable A, it throws an error.
        # I tried: if thread.join == True: functiontocall, but this did not call the function.

    def thread():
        def run():
            pythoncom.CoInitialize()
            # --- do stuff ---
            for i in range(1000):
                # thousands of calculations while updating state in the GUI
                A = result
            self.A = A
        thread = threading.Thread(target=run)
        thread.start()
note: I removed 'self' for simplicity.
thread.join() should tell me when the thread has finished, but for some reason I still can't run functiontocall().
Is this a bad way of organizing threads in general?
Edit: I can call the function after the thread is finished, but I cannot access variables while the thread is running, e.g. 0-100% progress for a progress bar in my GUI. When I use:
def watchthread():
    thread()
    thread.join()
    functiontocall()
I cannot update the status of the thread in my GUI. It just waits until the calculations are finished, then runs functiontocall().
Because you're using threads, once a thread has started, Python moves on to the next thing; it will not wait for the thread to finish unless you ask it to.
With your code, if you want to wait for the thread function to finish before moving on, then it doesn't sound like you need threading at all: a normal function call would run, complete, and then Python would move on to functiontocall().
If there's a reason you need threads which isn't coming across in the example, then I would suggest using thread.join():
threads = []  # list to hold threads if you have more than one

t = threading.Thread(target=run)
threads.append(t)
t.start()

for thread in threads:  # wait for all threads to finish
    thread.join()

functiontocall()  # will only run after all threads are done
Again, I'd suggest taking another look at whether threads are what you need here, as it doesn't seem apparent.
To update this answer based on the new information: this may be the way to make the variable accessible while the thread runs. The worker threads all update your class variable A, and your GUI update function reads it periodically and updates your GUI.
import threading

lock = threading.Lock()  # protects access to self.A across threads

class ThisClass():
    def __init__(self):
        self.A = 0

    def function_to_call(self):
        while self.A != 100:  # assuming this is a progress bar to 100%
            pass  # update in GUI

    def run(self):
        # does calculations
        with lock:  # prevent threads accessing the variable at the same time
            self.A += calculations

    def progress(self):
        threads = []  # list to hold threads if you have more than one

        t = threading.Thread(target=self.run)
        threads.append(t)

        f = threading.Thread(target=self.function_to_call)
        threads.append(f)

        for thread in threads:
            thread.start()

        for thread in threads:  # wait for all threads to finish
            thread.join()
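A hypothetical usage sketch; note that most GUI toolkits also require widget updates to happen on their main event thread, so in practice function_to_call would schedule the update rather than draw directly:
recorder = ThisClass()
recorder.progress()  # blocks here until both the worker and the watcher finish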

The workers in ThreadPoolExecutor are not really daemon threads

What I cannot figure out is that, although ThreadPoolExecutor uses daemon workers, they keep running even after the main thread exits.
Here is a minimal example in Python 3.6.4:
import concurrent.futures
import time

def fn():
    while True:
        time.sleep(5)
        print("Hello")

thread_pool = concurrent.futures.ThreadPoolExecutor()
thread_pool.submit(fn)

while True:
    time.sleep(1)
    print("Wow")
Both the main thread and the worker thread are infinite loops. So if I use KeyboardInterrupt to terminate the main thread, I expect the whole program to terminate too. But the worker thread is actually still running, even though it is a daemon thread.
The source code of ThreadPoolExecutor confirms that the worker threads are daemon threads:
t = threading.Thread(target=_worker,
                     args=(weakref.ref(self, weakref_cb),
                           self._work_queue))
t.daemon = True
t.start()
self._threads.add(t)
Further, if I manually create a daemon thread, it works like a charm:
from threading import Thread
import time

def fn():
    while True:
        time.sleep(5)
        print("Hello")

thread = Thread(target=fn)
thread.daemon = True
thread.start()

while True:
    time.sleep(1)
    print("Wow")
So I really cannot figure out this strange behavior.
Suddenly... I found why. Reading further into the source code of ThreadPoolExecutor:
# Workers are created as daemon threads. This is done to allow the interpreter
# to exit when there are still idle threads in a ThreadPoolExecutor's thread
# pool (i.e. shutdown() was not called). However, allowing workers to die with
# the interpreter has two undesirable properties:
# - The workers would still be running during interpreter shutdown,
# meaning that they would fail in unpredictable ways.
# - The workers could be killed while evaluating a work item, which could
# be bad if the callable being evaluated has external side-effects e.g.
# writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads finish.
_threads_queues = weakref.WeakKeyDictionary()
_shutdown = False

def _python_exit():
    global _shutdown
    _shutdown = True
    items = list(_threads_queues.items())
    for t, q in items:
        q.put(None)
    for t, q in items:
        t.join()

atexit.register(_python_exit)
There is an exit handler which joins all unfinished workers before the interpreter exits, which is why the pool's "daemon" threads outlive a KeyboardInterrupt in the main thread.
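As an aside, if you simply want Ctrl-C to kill the process despite that handler, one blunt option is os._exit(), which terminates immediately and skips atexit handlers along with all other cleanup. A sketch based on the example above:
import concurrent.futures
import os
import time

def fn():
    while True:
        time.sleep(5)
        print("Hello")

thread_pool = concurrent.futures.ThreadPoolExecutor()
thread_pool.submit(fn)

try:
    while True:
        time.sleep(1)
        print("Wow")
except KeyboardInterrupt:
    os._exit(1)  # bypasses the atexit handler that would join the workers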
Here's a way to avoid this problem; bad design can be beaten by another bad design. People write daemon=True only if they really know that the worker won't damage any objects or files.
In my case, I created a ThreadPoolExecutor with a single worker, and after a single submit I deleted the newly created thread from the module's queue registry so the interpreter won't wait until this thread stops on its own. Notice that worker threads are created after submit, not when the ThreadPoolExecutor is initialized.
import concurrent.futures.thread
from concurrent.futures import ThreadPoolExecutor

...

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(lambda: self._exec_file(args))
del concurrent.futures.thread._threads_queues[list(executor._threads)[0]]
It works in Python 3.8 but may not work in 3.9+ since this code is accessing private variables.
See the working piece of code on github

Will different threads end at the same time as the first one to finish?

I'm new to threads in Python. I have a question: suppose I start 3 threads like below, each taking care of a different task:
import thread

def start(taskName, delay):
    # do something with each taskName
    pass

# Create three threads as follows
try:
    thread.start_new_thread(start, ("task1", ))
    thread.start_new_thread(start, ("task2", ))
    thread.start_new_thread(start, ("task3", ))
except:
    print "Error: unable to start thread"
Suppose each start() call takes around 10-15 seconds to finish, depending on the taskName. My question is: if task 1 finishes in 12 seconds, task 2 in 10 seconds, and task 3 in 15 seconds, will task 2 finish and close on its own, leaving tasks 1 and 3 to run to completion, or will task 2 force tasks 1 and 3 to close once it is finished?
Are there any arguments we can pass to start_new_thread to achieve either of the two behaviours mentioned above:
1. First to finish forces the rest to close.
2. Each one finishes individually.
Thank you
As Max Noel already mentioned, it is advised to use the Thread class instead of using start_new_thread.
Now, as for your two questions:
1. First to finish forces the rest to close
You will need two important things: a shared queue that the threads can put their ID in once they are done. And a shared Event that will signal all threads to stop working when it is triggered. The main thread will wait for the first thread to put something in the queue and will then trigger the event to stop all threads.
import threading
import random
import time
import Queue

def work(worker_queue, id, stop_event):
    while not stop_event.is_set():
        print "This is worker", id
        # do stuff
        time.sleep(random.random() * 5)
        # put worker ID in queue
        if not stop_event.is_set():
            worker_queue.put(id)
            break

# queue for workers
worker_queue = Queue.Queue()

# indicator for other threads to stop
stop_event = threading.Event()

# run workers
threads = []
threads.append(threading.Thread(target=work, args=(worker_queue, 0, stop_event)))
threads.append(threading.Thread(target=work, args=(worker_queue, 1, stop_event)))
threads.append(threading.Thread(target=work, args=(worker_queue, 2, stop_event)))
for thread in threads:
    thread.start()

# this will block until the first element is in the queue
first_finished = worker_queue.get()
print first_finished, 'was first!'

# signal the rest to stop working
stop_event.set()
2. Each one finishes individually
Now this is much easier. Just call the join method on all Thread objects. This will wait for each thread to finish.
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
Btw, the above code is for Python 2.7. Let me know if you need Python 3
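For reference, here is a sketch of the same worker in Python 3, where the Queue module is renamed queue and print is a function:
import queue
import random
import threading
import time

def work(worker_queue, id, stop_event):
    while not stop_event.is_set():
        print("This is worker", id)
        # do stuff
        time.sleep(random.random() * 5)
        # put worker ID in queue
        if not stop_event.is_set():
            worker_queue.put(id)
            break

worker_queue = queue.Queue()
stop_event = threading.Event()

threads = [threading.Thread(target=work, args=(worker_queue, i, stop_event))
           for i in range(3)]
for thread in threads:
    thread.start()

# blocks until the first worker reports in
first_finished = worker_queue.get()
print(first_finished, 'was first!')

# signal the rest to stop working
stop_event.set()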
First off, don't use start_new_thread, it's a low-level primitive. Use the Thread class in the threading module instead.
Once you have that, Thread instances have a .join() method, which you can call from another thread (your program's main thread) to wait for them to terminate.
t1 = Thread(target=my_func)
t1.start()
# Waits for t1 to finish.
t1.join()
All threads will terminate when the process terminates.
Thus, if your main program ends after the try..except, then all three threads may get terminated prematurely. For example:
import thread
import logging
import time

logger = logging.getLogger(__name__)

def start(taskname, n):
    for i in range(n):
        logger.info('{}'.format(i))
        time.sleep(0.1)

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s %(threadName)s] %(message)s',
                        datefmt='%H:%M:%S')
    try:
        thread.start_new_thread(start, ("task1", 10))
        thread.start_new_thread(start, ("task2", 5))
        thread.start_new_thread(start, ("task3", 8))
    except Exception as err:
        logger.exception(err)
may print something like
[14:15:16 Dummy-3] 0
[14:15:16 Dummy-1] 0
In contrast, if you place
time.sleep(5)
at the end of the script, then you see the full expected output from all three
threads.
Note also that the thread module is a low-level module; unless you have a
particular reason for using it, most often people use the threading module which
implements more useful features for dealing with threads, such as a join
method which blocks until the thread has finished. See below for an example.
The docs state:
When the function returns, the thread silently exits.
When the function terminates with an unhandled exception, a stack trace is
printed and then the thread exits (but other threads continue to run).
Thus, by default, when one thread finishes, the others continue to run.
The example above also demonstrates this.
To make all the threads exit when one function finishes is more difficult.
One thread cannot kill another thread cleanly (e.g., without killing the entire process).
Using threading, you could arrange for the threads to set a variable
(e.g. flag) to True when finished, and have each thread check the state of
flag periodically and quit if it is True. But note that the other threads will
not necessarily terminate immediately; they will only terminate when they next
check the state of flag. If a thread is blocked, waiting for I/O for instance,
then it may not check the flag for a considerable amount of time (if ever!).
However, if the thread spends most of its time in a quick loop, you could check the state of flag once per iteration:
import threading
import logging
import time

logger = logging.getLogger(__name__)

def start(taskname, n):
    global flag
    for i in range(n):
        if flag:
            break
        logger.info('{}'.format(i))
        time.sleep(0.1)
    else:
        # get here if the loop finishes without breaking
        logger.info('FINISHED')
        flag = True

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s %(threadName)s] %(message)s',
                        datefmt='%H:%M:%S')
    threads = list()
    flag = False
    try:
        threads.append(threading.Thread(target=start, args=("task1", 10)))
        threads.append(threading.Thread(target=start, args=("task2", 5)))
        threads.append(threading.Thread(target=start, args=("task3", 8)))
    except Exception as err:
        logger.exception(err)
    for t in threads:
        t.start()
    for t in threads:
        # make the main process wait until all threads have finished.
        t.join()
