Python infinite threading loop without duplicating or interrupting the thread task - python

I wrote this code to create an infinite threading loop without duplicating or interrupting the thread's task.
import threading
import time

thread = None

def loopMyTask():
    global thread
    if thread is not None and thread.isAlive():
        thread.cancel()
        thread.join()
    thread = threading.Timer(6.0, loopMyTask)
    thread.daemon = True
    thread.start()
    myTask()

def myTask():
    # simulate a task
    for i in range(14):
        print(str(i))
        time.sleep(1)

while True:
    loopMyTask()
Apparently it's working, but it prints an error.

I am not sure what you want to do, but only the main thread does any work here:
1. You call loopMyTask().
2. It sets a timer to start a new thread that calls loopMyTask() again in 6 seconds.
3. It calls myTask(), which prints 0, 1, 2, 3, 4, 5...
4. The timer triggers a call to loopMyTask() in a new thread.
5. The new thread finds that the global variable thread is set (and that thread is alive), so it calls cancel(). That does nothing: cancel() is meant to cancel the timer, but the timer has already finished; indeed, this part of the code is only running because the timer fired.
6. The new thread calls thread.join(), which would cause a deadlock, since it would be waiting for itself to finish. Fortunately, the threading module prevents this kind of deadlock and raises a RuntimeError. The thread dies.
7. The main thread resumes its execution of myTask(), printing 6, 7, 8...
8. Once it finishes, the loop goes back to step 1. This time the dead timer thread is no longer alive, so cancel() and join() are skipped, and everything repeats.
So you would get the same results (aside from the error) if you just called myTask() in a loop, as in the sketch below.
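For illustration, a minimal sketch of that simplification (reusing the question's myTask unchanged, no timers or extra threads):
import time

def myTask():
    # simulate a task
    for i in range(14):
        print(str(i))
        time.sleep(1)

while True:
    myTask()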

Related

Threads are not doing their job when there is a print instruction [duplicate]

I have the following script (don't refer to the contents):
import _thread

def func1(arg1, arg2):
    print("Write to CLI")

def verify_result():
    func1()

for _ in range(4):
    _thread.start_new_thread(func1, (DUT1_CLI, '0'))

verify_result()
I want to concurrently execute (say with 4 threads) func1(), which in my case includes a function call that can take time to execute. Then, only after the last thread has finished its work, I want to execute verify_result().
Currently, the result I get is that all threads finish their job, but verify_result() is executed before all the threads have finished.
I have even tried to use the following code (of course I imported threading) under the for loop, but that didn't do the trick (don't refer to the arguments):
t = threading.Thread(target = Enable_WatchDog, args = (URL_List[x], 180, Terminal_List[x], '0'))
t.start()
t.join()
Your last threading example is close, but you have to collect the threads in a list, start them all at once, then wait for them to complete all at once. Here's a simplified example:
import threading
import time

# Lock to serialize console output
output = threading.Lock()

def threadfunc(a, b):
    for i in range(a, b):
        time.sleep(.01)  # sleep to make the "work" take longer
        with output:
            print(i)

# Collect the threads
threads = []
for i in range(10, 100, 10):
    # Create 9 threads counting 10-19, 20-29, ... 90-99.
    thread = threading.Thread(target=threadfunc, args=(i, i + 10))
    threads.append(thread)

# Start them all
for thread in threads:
    thread.start()

# Wait for all to complete
for thread in threads:
    thread.join()
Say you have a list of threads.
You loop over them to start execution of the run function within each thread:
for each_thread in thread_pool:
    each_thread.start()
In the same way, you write another loop after you have started all the threads:
for each_thread in thread_pool:
    each_thread.join()
What join() does is wait for thread i to finish execution before the loop moves on to wait for thread i+1.
The threads run concurrently; join() just synchronizes the point at which each thread's result is collected.
In your case specifically, you can run the join() loop and then call verify_result(), as sketched below.
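Applied to the snippet from the question, a rough sketch could look like this (func1, verify_result and DUT1_CLI stand in for the question's placeholders; the bodies here are dummies just to make the example self-contained):
import threading
import time

DUT1_CLI = "dut1"  # dummy stand-in for the question's undefined name

def func1(arg1, arg2):
    # dummy stand-in for the real, slow function
    time.sleep(1)
    print("Write to CLI", arg1, arg2)

def verify_result():
    print("all workers are done, verifying result")

threads = []
for _ in range(4):
    t = threading.Thread(target=func1, args=(DUT1_CLI, '0'))
    threads.append(t)
    t.start()

# wait for all four workers before checking the result
for t in threads:
    t.join()

verify_result()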

Python - Threading - Make sure that Timer() stops correctly

I want to make sure that my timer correctly stops after running timer.cancel(), but I am not sure if I'm doing this correctly. To my knowledge, first you stop it by running cancel(), and then wait until the thread is completely cleaned up and terminated, using join(). If I run join() after canceling, any statements after join() will be executed only after the thread is completely terminated. Am I understanding this correctly?
If not, how do I make sure that my thread is terminated completely, and that my next lines of code will run only after the thread's termination?
def f():
    ...  # whatever the timer is supposed to run

timer = threading.Timer(5, f)

if something_happens:
    timer.cancel()
    timer.join()
    do_something_after_timer_completely_stops()
You don't have to call .join(); calling .cancel() is enough to stop the timer. However, there's a caveat: timers can only be stopped while they are still in the waiting stage (before the time expires). If the actual code is already running, it can't be stopped by .cancel() anymore; it becomes a normal thread.
The way the threading.Timer class is implemented, it uses a threading.Event instance that is waited upon in order to allow cancelling the timer. However, if the timer runs out, that event is only set after the function has finished, so you can't use it to reliably detect whether the thread has started. I suggest creating your own event object if you want to be notified of that.
Example: You're creating a timer to call f:
timer = threading.Timer(5, f)
Instead, create a new event and a function to set it before calling f, and schedule your timer to call that new function you created.
f_called = threading.Event()

def calls_f(*args, **kwds):
    """function that calls f after setting the event f_called"""
    f_called.set()
    return f(*args, **kwds)

timer = threading.Timer(5, calls_f)
Then you can use that event to check if f was already called:
if f_called.is_set():
    print("Too bad, thread is already running, can't cancel the timer!")

Python threading - blocking operation - terminate execution

I have a python program like this:
from threading import Thread

def foo():
    while True:
        blocking_function()  # Actually waiting for a message on a socket

def run():
    Thread(target=foo).start()

run()
This program does not terminate on KeyboardInterrupt, because the main thread exits before the thread running foo() has a chance to terminate. I tried keeping the main thread alive by running a while True loop after calling run(), but that also doesn't exit the program (blocking_function() just keeps the thread blocked, I guess, waiting for the message). I also tried catching the KeyboardInterrupt exception in the main thread and calling sys.exit(0) - same outcome (I would actually expect it to kill the thread running foo(), but apparently it doesn't).
Now, I could simply put a timeout on blocking_function(), but that's no fun. Can I unblock it on KeyboardInterrupt or anything similar?
Main goal: terminate the program, blocked thread and all, on Ctrl+C.
Maybe a little bit of a workaround, but you could use the thread module instead of threading. This is not really advised, but if it suits you and your program, why not.
You will need to keep your program running; otherwise the thread exits right after run():
import thread, time

def foo():
    while True:
        blocking_function()  # Actually waiting for a message on a socket

def run():
    thread.start_new_thread(foo, ())

run()

while True:
    # Keep the main thread alive
    time.sleep(1)
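For reference, a rough Python 3 equivalent of the same idea (the thread module no longer exists there) is to make the worker a daemon threading.Thread and keep the main thread in a sleep loop, so Ctrl+C lands in the main thread and the process exit takes the blocked daemon thread down with it. blocking_function below is just a placeholder simulating the question's socket wait:
import threading
import time

def blocking_function():
    # placeholder simulating a long blocking socket read
    time.sleep(60)

def foo():
    while True:
        blocking_function()

def run():
    t = threading.Thread(target=foo)
    t.daemon = True  # daemon threads don't keep the process alive
    t.start()

run()

try:
    while True:
        # keep the main thread alive; Ctrl+C raises KeyboardInterrupt here
        time.sleep(1)
except KeyboardInterrupt:
    pass  # main thread exits; the blocked daemon thread dies with the process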

Difference between thread.join and thread.abort in python multithreading

I am new to Python multithreading and am trying to understand the basic difference between joining multiple worker threads and calling abort on them after I am done processing with them. Can somebody please explain this to me with an example?
.join() and setting an abort flag are two different steps in cleanly shutting down a thread.
join() just waits for a thread that is going to terminate anyway to finish. Thus:
import threading
import time

def thread_main():
    time.sleep(10)

t = threading.Thread(target=thread_main)
t.start()
t.join()
This is a reasonable program. The join just waits until the thread is finished. It doesn't do anything to make that happen, but the thread will terminate anyway, because it is just a 10 second sleep.
In contrast
import threading
import time

def thread_main():
    while True:
        time.sleep(10)

t = threading.Thread(target=thread_main)
t.start()
t.join()
is not a good idea, because join will still wait for the thread to terminate on its own. But the thread will never do that, because it loops forever. Thus the whole program can't terminate.
That's the point where you want some kind of signaling to the thread for it to stop itself:
import threading
import time

stop_thread = False

def thread_main():
    while not stop_thread:
        time.sleep(10)

t = threading.Thread(target=thread_main)
t.start()

stop_thread = True
t.join()
Here stop_thread takes the role of your __abort flag and signals the thread to stop after it has finished its latest piece of work (the sleep(10) in this case).
Thus this program again is reasonable and terminates when asked to.
Another popular way to signal a thread to stop, when the thread uses a consumer pattern (i.e. gets its work from a queue), is to post a special 'terminate now' work item as an alternative to setting a flag variable (a self-contained sketch follows below):
def thread_main():
    while True:
        (quit, data) = work_queue.get()
        if quit:
            break
        do_work(data)
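A self-contained sketch of that queue/sentinel pattern (work_queue and do_work are just illustrative names, here in Python 3 form):
import queue
import threading

def do_work(data):
    # illustrative work function
    print("processing", data)

work_queue = queue.Queue()

def thread_main():
    while True:
        quit, data = work_queue.get()
        if quit:
            break
        do_work(data)

t = threading.Thread(target=thread_main)
t.start()

# hand the worker some items, then the 'terminate now' sentinel
work_queue.put((False, "job 1"))
work_queue.put((False, "job 2"))
work_queue.put((True, None))

t.join()  # returns once the worker has seen the sentinel and exited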

Will different threads end at the same time as the first one to finish?

I'm new to threads in Python. I have a question: suppose I start 3 threads like below, each one taking care of a different task:
def start(taskName, delay):
    # do something with each taskName
    pass

# Create three threads as follows
try:
    thread.start_new_thread(start, ("task1", ))
    thread.start_new_thread(start, ("task2", ))
    thread.start_new_thread(start, ("task3", ))
except:
    print "Error: unable to start thread"
Suppose that each start() takes around 10-15 seconds to finish, depending on the taskName. My question is: if task 1 finishes in 12 seconds, task 2 in 10 seconds and task 3 in 15 seconds, will task 2 finish, then close and leave task 1 and task 3 running until they finish, or will task 2 force tasks 1 and 3 to close once task 2 is finished?
Are there any arguments we can pass to the start_new_thread method in order to achieve the 2 behaviours I have mentioned above:
1. First to finish forces the rest to close.
2. Each one finishes individually.
Thank you
As Max Noel already mentioned, it is advised to use the Thread class instead of using start_new_thread.
Now, as for your two questions:
1. First to finish forces the rest to close
You will need two important things: a shared queue that the threads can put their ID into once they are done, and a shared Event that will signal all threads to stop working when it is triggered. The main thread will wait for the first thread to put something in the queue and will then trigger the event to stop all threads.
import threading
import random
import time
import Queue

def work(worker_queue, id, stop_event):
    while not stop_event.is_set():
        print "This is worker", id
        # do stuff
        time.sleep(random.random() * 5)
        # put worker ID in queue
        if not stop_event.is_set():
            worker_queue.put(id)
            break

# queue for workers
worker_queue = Queue.Queue()

# indicator for other threads to stop
stop_event = threading.Event()

# run workers
threads = []
threads.append(threading.Thread(target=work, args=(worker_queue, 0, stop_event)))
threads.append(threading.Thread(target=work, args=(worker_queue, 1, stop_event)))
threads.append(threading.Thread(target=work, args=(worker_queue, 2, stop_event)))

for thread in threads:
    thread.start()

# this will block until the first element is in the queue
first_finished = worker_queue.get()
print first_finished, 'was first!'

# signal the rest to stop working
stop_event.set()
2. Each finishes individually
Now this is much easier. Just call the join method on all Thread objects. This will wait for each thread to finish.
for thread in threads:
    thread.start()

for thread in threads:
    thread.join()
Btw, the above code is for Python 2.7. Let me know if you need Python 3
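For what it's worth, a Python 3 sketch of the same first-to-finish idea only really needs the queue module and print() calls instead of Queue and print statements:
import threading
import random
import time
import queue

def work(worker_queue, id, stop_event):
    while not stop_event.is_set():
        print("This is worker", id)
        # do stuff
        time.sleep(random.random() * 5)
        # put worker ID in queue
        if not stop_event.is_set():
            worker_queue.put(id)
            break

worker_queue = queue.Queue()
stop_event = threading.Event()

threads = [threading.Thread(target=work, args=(worker_queue, i, stop_event))
           for i in range(3)]
for thread in threads:
    thread.start()

first_finished = worker_queue.get()  # blocks until the first worker reports in
print(first_finished, "was first!")

stop_event.set()  # signal the rest to stop working
for thread in threads:
    thread.join()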
First off, don't use start_new_thread, it's a low-level primitive. Use the Thread class in the threading module instead.
Once you have that, Thread instances have a .join() method, which you can call from another thread (your program's main thread) to wait for them to terminate.
t1 = Thread(target=my_func)
t1.start()

# Waits for t1 to finish.
t1.join()
All threads will terminate when the process terminates.
Thus, if your main program ends after the try..except, then all three threads may get terminated prematurely. For example:
import thread
import logging
import time

logger = logging.getLogger(__name__)

def start(taskname, n):
    for i in range(n):
        logger.info('{}'.format(i))
        time.sleep(0.1)

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s %(threadName)s] %(message)s',
                        datefmt='%H:%M:%S')
    try:
        thread.start_new_thread(start, ("task1", 10))
        thread.start_new_thread(start, ("task2", 5))
        thread.start_new_thread(start, ("task3", 8))
    except Exception as err:
        logger.exception(err)
may print something like
[14:15:16 Dummy-3] 0
[14:15:16 Dummy-1] 0
In contrast, if you place
time.sleep(5)
at the end of the script, then you see the full expected output from all three
threads.
Note also that the thread module is a low-level module; unless you have a
particular reason for using it, most often people use the threading module which
implements more useful features for dealing with threads, such as a join
method which blocks until the thread has finished. See below for an example.
The docs state:
When the function returns, the thread silently exits.
When the function terminates with an unhandled exception, a stack trace is
printed and then the thread exits (but other threads continue to run).
Thus, by default, when one thread finishes, the others continue to run.
The example above also demonstrates this.
To make all the threads exit when one function finishes is more difficult.
One thread can not kill another thread cleanly (e.g., without killing the entire
process.)
Using threading, you could arrange for the threads to set a variable
(e.g. flag) to True when finished, and have each thread check the state of
flag periodically and quit if it is True. But note that the other threads will
not necessarily terminate immediately; they will only terminate when they next
check the state of flag. If a thread is blocked, waiting for I/O for instance,
then it may not check the flag for a considerable amount of time (if ever!).
However, if the thread spends most of its time in a quick loop, you could check the state of flag once per iteration:
import threading
import logging
import time

logger = logging.getLogger(__name__)

def start(taskname, n):
    global flag
    for i in range(n):
        if flag:
            break
        logger.info('{}'.format(i))
        time.sleep(0.1)
    else:
        # get here if loop finishes without breaking
        logger.info('FINISHED')
        flag = True

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='[%(asctime)s %(threadName)s] %(message)s',
                        datefmt='%H:%M:%S')

    threads = list()
    flag = False
    try:
        threads.append(threading.Thread(target=start, args=("task1", 10)))
        threads.append(threading.Thread(target=start, args=("task2", 5)))
        threads.append(threading.Thread(target=start, args=("task3", 8)))
    except Exception as err:
        logger.exception(err)

    for t in threads:
        t.start()

    for t in threads:
        # make the main process wait until all threads have finished.
        t.join()
