Python: Ensure that Threads aren't Running in the Background

I'm going through an example from a book about threading, and this is the example they give:
## To use threads you need to import Thread using the following code:
from threading import Thread
## We also use the sleep function to make the thread "sleep"
from time import sleep

## To create a thread in Python you'll want to make your class work as a thread.
## For this, you should subclass your class from the Thread class
class CookBook(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.message = "Hello Parallel Python CookBook!!\n"

    ## This method prints only the message
    def print_message(self):
        print(self.message)

    ## The run method prints the message ten times
    def run(self):
        print("Thread Starting\n")
        x = 0
        while (x < 10):
            self.print_message()
            sleep(2)
            x += 1
        print("Thread Ended\n")

# start the main process
print("Process Started")

# create an instance of the CookBook class
hello_Python = CookBook()

# print the message...starting the thread
hello_Python.start()

# end the main process
print("Process Ended")
It's the first example in the first chapter, and at the end of the chapter the author says to make sure you don't leave any threads running in the background, because that is bad programming.
Question:
Given my example, how do you properly verify no threads are running in the background?

There are two ways, depending on what you need:
Use join(), as the comments suggested.
Make your thread a daemon thread, which means calling Thread.__init__(self, daemon=True). When your main thread exits, any daemon threads you created are terminated automatically, so you don't have to worry about them running in the background.
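For example, a minimal sketch of both options applied to the CookBook class above (the daemon keyword shown in the comment assumes Python 3):

from threading import Thread
from time import sleep

class CookBook(Thread):
    def __init__(self):
        # Option 2: pass daemon=True here instead, i.e.
        # Thread.__init__(self, daemon=True), and the thread is killed
        # automatically when the main thread exits.
        Thread.__init__(self)
        self.message = "Hello Parallel Python CookBook!!\n"

    def run(self):
        for _ in range(10):
            print(self.message)
            sleep(2)

print("Process Started")
hello_Python = CookBook()
hello_Python.start()

# Option 1: block here until the thread has finished, so nothing is
# still running in the background when the main program ends.
hello_Python.join()
print("Process Ended")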

Related

os._exit() called on another thread does not exit a program

I just start a new thread:
self.thread = ThreadedFunc()
self.thread.start()
After something happens I want to exit my program, so I'm calling os._exit():
os._exit(1)
The program still works. Everything is functional and it just looks like the os._exit() didn't execute.
Is there a different way to exit the whole program from a different thread? How can I fix this?
EDIT: Added more complete code sample.
self.thread = DownloadThread()
self.thread.data_downloaded.connect(self.on_data_ready)
self.thread.data_progress.connect(self.on_progress_ready)
self.progress_initialized = False
self.thread.start()

class DownloadThread(QtCore.QThread):
    def run(self):
        # downloading stuff etc.
        sleep(1)
        subprocess.call(os.getcwd() + "\\another_process.exe")
        sleep(2)
        os._exit(1)
EDIT 2: SOLVED! There are quit(), terminate() and exit() functions which simply stop the thread. It was that easy. Just look at the docs.
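For reference, a minimal sketch of that fix (assuming a PyQt5/PySide-style QThread: quit() and exit() stop the thread's event loop, so they only help if run() runs an event loop, while terminate() forcibly kills the thread and should be a last resort):

# Ask the thread to stop and wait for it to finish.
self.thread.quit()
self.thread.wait()

# Or, as a last resort, kill it forcibly (may leave resources unreleased):
# self.thread.terminate()
# self.thread.wait()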
Calling os._exit(1) works for me.
You should use the standard library threading module.
I guess you are using multiprocessing, which is a process-based "threading" interface. It offers an API similar to threading, but creates a child process instead of a child thread, so os._exit(1) only exits the child process and does not affect the main process.
Also, make sure you call join() in the main thread. Otherwise, the operating system may schedule the main thread to run to completion before doing anything in the child thread.
sys.exit() does not work because it is equivalent to raising a SystemExit exception, and raising an exception in a thread only exits that thread, not the entire process.
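To illustrate that last point, here is a quick sketch (the worker name is illustrative, not from the question) showing that sys.exit() only ends the calling thread while the process keeps running:

import sys
import threading
import time

def worker():
    print("worker: exiting via sys.exit()")
    sys.exit()  # raises SystemExit, which ends only this thread

t = threading.Thread(target=worker)
t.start()
t.join()
time.sleep(0.1)
print("main: still alive, the process did not exit")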
Below is the full sample code, tested under Ubuntu with python3 thread.py; echo $?.
The return code is 1, as expected.
import os
import sys
import time
import threading

# Python Threading Example for Beginners

# First Method
def greet_them(people):
    for person in people:
        print("Hello Dear " + person + ". How are you?")
        os._exit(1)
        time.sleep(0.5)

# Second Method
def assign_id(people):
    i = 1
    for person in people:
        print("Hey! {}, your id is {}.".format(person, i))
        i += 1
        time.sleep(0.5)

people = ['Richard', 'Dinesh', 'Elrich', 'Gilfoyle', 'Gevin']

t = time.time()
# Created the threads
t1 = threading.Thread(target=greet_them, args=(people,))
t2 = threading.Thread(target=assign_id, args=(people,))
# Started the threads
t1.start()
t2.start()
# Joined the threads
t1.join()  # Cannot remove this join() for this example
t2.join()
# Possible to reach here if join() removed
print("I took " + str(time.time() - t))
Credit: Sample code is copied and modified from https://www.simplifiedpython.net/python-threading-example/

Python Paramiko Ctrl Z is not working for threading [duplicate]

I am testing Python threading with the following script:
import threading

class FirstThread (threading.Thread):
    def run (self):
        while True:
            print 'first'

class SecondThread (threading.Thread):
    def run (self):
        while True:
            print 'second'

FirstThread().start()
SecondThread().start()
This is running in Python 2.7 on Kubuntu 11.10. Ctrl+C will not kill it. I also tried adding a handler for system signals, but that did not help:
import signal
import sys

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
To kill the process, I send the program to the background with Ctrl+Z (which isn't being ignored) and then kill it by PID. Why is Ctrl+C being ignored so persistently? How can I resolve this?
Ctrl+C terminates the main thread, but because your threads aren't in daemon mode, they keep running, and that keeps the process alive. We can make them daemons:
f = FirstThread()
f.daemon = True
f.start()
s = SecondThread()
s.daemon = True
s.start()
But then there's another problem - once the main thread has started your threads, there's nothing else for it to do. So it exits, and the threads are destroyed instantly. So let's keep the main thread alive:
import time
while True:
    time.sleep(1)
Now it will keep printing 'first' and 'second' until you hit Ctrl+C.
Edit: as commenters have pointed out, the daemon threads may not get a chance to clean up things like temporary files. If you need that, then catch the KeyboardInterrupt on the main thread and have it co-ordinate cleanup and shutdown. But in many cases, letting daemon threads die suddenly is probably good enough.
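A minimal sketch of that coordinated-shutdown idea (shown in Python 3 syntax; the stop_event name and the cleanup comment are illustrative, not from the original answer):

import threading
import time

stop_event = threading.Event()

def worker(name):
    while not stop_event.is_set():
        print(name)
        time.sleep(1)
    # cleanup (e.g. removing temporary files) would go here

t = threading.Thread(target=worker, args=('first',), daemon=True)
t.start()

try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    stop_event.set()   # ask the worker to finish its loop and clean up
    t.join()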
KeyboardInterrupt and signals are only seen by the process (i.e. the main thread)... Have a look at Ctrl-c i.e. KeyboardInterrupt to kill threads in python
I think it's best to call join() on your threads when you expect them to die. I've taken the liberty of changing your loops so they can end (you can add whatever cleanup is required there as well). The die variable is checked on each pass, and when it's True, the thread exits.
import threading
import time

class MyThread (threading.Thread):
    die = False

    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run (self):
        while not self.die:
            time.sleep(1)
            print (self.name)

    def join(self):
        self.die = True
        super().join()

if __name__ == '__main__':
    f = MyThread('first')
    f.start()
    s = MyThread('second')
    s.start()
    try:
        while True:
            time.sleep(2)
    except KeyboardInterrupt:
        f.join()
        s.join()
An improved version of @Thomas K's answer:
Define a helper function is_any_thread_alive(), based on this gist, which lets main() terminate automatically.
Example code:
import threading
import time

def job1():
    ...

def job2():
    ...

def is_any_thread_alive(threads):
    return True in [t.is_alive() for t in threads]

if __name__ == "__main__":
    ...
    t1 = threading.Thread(target=job1, daemon=True)
    t2 = threading.Thread(target=job2, daemon=True)
    t1.start()
    t2.start()

    while is_any_thread_alive([t1, t2]):
        time.sleep(0)
One simple 'gotcha' to beware of: are you sure CAPS LOCK isn't on?
I was running a Python script in the Thonny IDE on a Pi4. With CAPS LOCK on, Ctrl+Shift+C is passed to the keyboard buffer, not Ctrl+C.

Python threading.Timer() not working in my process

import os
import sys
from multiprocessing import Process, Queue
import threading

class Test:
    def __init__(self):
        print '__init__ is called'

    def say_hello_again_and_again(self):
        print 'Hello :D'
        threading.Timer(1, self.say_hello_again_and_again).start()

test = Test()
#test.say_hello_again_and_again()
process = Process(target=test.say_hello_again_and_again)
process.start()
This is my test code.
The result:
pi#raspberrypi:~/Plant2 $ python test2.py
__init__ is called
Hello :D
If I use test.say_hello_again_and_again(), "Hello :D" is printed repeatedly.
But the process does not work as I expected. Why is "Hello :D" not being printed in my process?
What's happening in my process?
There are two problems with your code:
First: you start a process with start(). This does a fork, which means you now have two processes, the parent and the child, running side by side. The parent process then immediately exits, because after start() it has reached the end of the program. To wait until the child has finished (which in your case is never), you have to add process.join().
I did test your suggestion, but it does not work
Indeed. There is a second issue: you start a new thread with threading.Timer(1, ...).start() but then immediately end the process. You never wait for the thread to start, because the underlying process dies right away. You would also need to wait until the thread has stopped, using join().
This is how your program would look:
from multiprocessing import Process
import threading

class Test:
    def __init__(self):
        print '__init__ is called'

    def say_hello_again_and_again(self):
        print 'Hello :D'
        timer = threading.Timer(1, self.say_hello_again_and_again)
        timer.start()
        timer.join()

test = Test()
process = Process(target=test.say_hello_again_and_again)
process.start()
process.join()
But this is suboptimal at best, because you mix multiprocessing (which uses fork to start independent processes) and threading (which starts a thread within a process). While this is not really a problem, it makes debugging a lot harder (one issue with the code above, for example, is that you can't stop it with Ctrl+C because, for some reason, your spawned process is inherited by the OS and kept running). Why don't you just do this?
from multiprocessing import Process, Queue
import time

class Test:
    def __init__(self):
        print '__init__ is called'

    def say_hello_again_and_again(self):
        while True:
            print 'Hello :D'
            time.sleep(1)

test = Test()
process = Process(target=test.say_hello_again_and_again)
process.start()
process.join()

Why won't this Python child-thread allow the Parent thread to do its work?

I have the following Python multi-threading program:
#!/usr/bin/env python
from multiprocessing import Process
import time

child_started = False

def child_func():
    global child_started
    child_started = True
    print "Child Started"
    while True:
        time.sleep(1)
        print "X"

if __name__ == '__main__':
    global child_started
    child_thread = Process(target=child_func)
    child_thread.start()
    while child_started is False:
        time.sleep(2)
    print "Parent Starting Process"
    # Do something
    print "Parent Done"
    child_thread.terminate()
    print "Child Cancelled by Parent"
    child_thread.join()
I expected the child process to do some work, and then eventually the parent to come in and terminate it. However, that's not happening. As you can see below, once the child process starts running, the parent process is frozen out and never does anything. Why? How can I fix it?
$ ~/threads.py
~/threads.py:20: SyntaxWarning: name 'child_started' is assigned to before global declaration
Child Started
X
X
X
X
X
As @thepaul said, your child_started variable is effectively local to each process; it is not shared between the parent and the child when you use multiprocessing.
I suggest you create a Queue: once the child process has started, put an element into the queue, then check queue.empty() in your main process and do your work.
#!/usr/bin/env python
from multiprocessing import Process
from multiprocessing import Queue
import time

def child_func(queue):
    print "Child Started"
    # put anything into the queue once `child_func` is invoked; this
    # indicates that your child process is working
    queue.put("started...")
    while True:
        time.sleep(1)
        print "X"

if __name__ == '__main__':
    queue = Queue()
    child_thread = Process(target=child_func, args=(queue,))
    child_thread.start()

    # keep sleeping until the queue is no longer empty
    while queue.empty():
        time.sleep(2)

    print "Parent Starting Process"
    # Do something
    print "Parent Done"
    child_thread.terminate()
    print "Child Cancelled by Parent"
    child_thread.join()
When you set child_started in your child_func function, you are setting it in the child process's own copy of the module, not in the parent's. Since you are using multiprocessing, the two processes do not share the global variable at all.
You should pass something with shared storage across the participating processes, such as a multiprocessing.Event, instead.
Edit: whoops, I answered this first as if you were using threading instead of multiprocessing.
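For example, a minimal sketch of the multiprocessing.Event approach (shown in Python 3 syntax; the variable names follow the question's code, but the exact wiring is illustrative):

from multiprocessing import Process, Event
import time

def child_func(started):
    started.set()          # signal the parent that the child is running
    print("Child Started")
    while True:
        time.sleep(1)
        print("X")

if __name__ == '__main__':
    started = Event()
    child = Process(target=child_func, args=(started,))
    child.start()

    started.wait()         # block until the child has signalled it started
    print("Parent Starting Process")
    # Do something
    print("Parent Done")
    child.terminate()
    child.join()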

How to stop a looping thread in Python?

What's the proper way to tell a looping thread to stop looping?
I have a fairly simple program that pings a specified host in a separate threading.Thread class. In this class it sleeps 60 seconds, then runs again until the application quits.
I'd like to implement a 'Stop' button in my wx.Frame to ask the looping thread to stop. It doesn't need to end the thread right away, it can just stop looping once it wakes up.
Here is my threading class (note: I haven't implemented looping yet, but it would likely fall under the run method in PingAssets)
class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset

    def run(self):
        config = controller.getConfig()
        fmt = config['timefmt']
        start_time = datetime.now().strftime(fmt)
        try:
            if onlinecheck.check_status(self.asset):
                status = "online"
            else:
                status = "offline"
        except socket.gaierror:
            status = "an invalid asset tag."
        msg = ("{}: {} is {}. \n".format(start_time, self.asset, status))
        wx.CallAfter(self.window.Logger, msg)
And in my wxPython Frame I have this function, called from a Start button:
def CheckAsset(self, asset):
    self.count += 1
    thread = PingAssets(self.count, asset, self)
    self.threads.append(thread)
    thread.start()
Threaded stoppable function
Instead of subclassing threading.Thread, one can modify the function to allow stopping via a flag.
We need an object, accessible to the running function, on which we set the flag to stop running.
We can use the threading.currentThread() object.
import threading
import time

def doit(arg):
    t = threading.currentThread()
    while getattr(t, "do_run", True):
        print("working on %s" % arg)
        time.sleep(1)
    print("Stopping as you wish.")

def main():
    t = threading.Thread(target=doit, args=("task",))
    t.start()
    time.sleep(5)
    t.do_run = False

if __name__ == "__main__":
    main()
The trick is that the running thread can have additional properties attached to it. The solution builds on two assumptions:
the thread has a property "do_run" with a default value of True
the driving parent can set the started thread's "do_run" property to False
Running the code, we get the following output:
$ python stopthread.py
working on task
working on task
working on task
working on task
working on task
Stopping as you wish.
Pill to kill - using Event
Another alternative is to use a threading.Event as a function argument. It is False by default, but an external thread can "set" it (to True), and the function can learn about it using the wait(timeout) function.
We can wait with a zero timeout, but we can also use it as the sleep timer (as done below).
def doit(stop_event, arg):
    while not stop_event.wait(1):
        print("working on %s" % arg)
    print("Stopping as you wish.")

def main():
    pill2kill = threading.Event()
    t = threading.Thread(target=doit, args=(pill2kill, "task"))
    t.start()
    time.sleep(5)
    pill2kill.set()
    t.join()
Edit: note that stop_event.wait() with no timeout blocks until the event is set, which would stall the loop; with a timeout, as in wait(1) above, it returns True if the event has been set and False otherwise. Checking stop_event.is_set() explicitly (and sleeping separately) also works.
Stopping multiple threads with one pill
The advantage of the pill to kill is better seen if we have to stop multiple threads at once, as one pill works for all of them.
The doit function does not change at all; only main handles the threads a bit differently.
def main():
    pill2kill = threading.Event()
    tasks = ["task ONE", "task TWO", "task THREE"]

    def thread_gen(pill2kill, tasks):
        for task in tasks:
            t = threading.Thread(target=doit, args=(pill2kill, task))
            yield t

    threads = list(thread_gen(pill2kill, tasks))
    for thread in threads:
        thread.start()
    time.sleep(5)
    pill2kill.set()
    for thread in threads:
        thread.join()
This has been asked before on Stack Overflow. See the following links:
Is there any way to kill a Thread in Python?
Stopping a thread after a certain amount of time
Basically, you just need to set up the thread with a stop function that sets a sentinel value the thread will check. In your case, something in your loop should check the sentinel value to see if it has changed, and if it has, the loop can break and the thread can die, as in the sketch below.
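A minimal sketch of that sentinel pattern (the class and attribute names are illustrative, not taken from the linked answers):

import threading
import time

class Pinger(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self._stopped = False          # the sentinel value

    def stop(self):
        self._stopped = True

    def run(self):
        while not self._stopped:       # check the sentinel on every pass
            # ping the host here, then sleep
            time.sleep(1)

p = Pinger()
p.start()
time.sleep(5)
p.stop()      # e.g. from the wx 'Stop' button handler
p.join()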
I read the other questions on Stack Overflow, but I was still a little confused about communicating across classes. Here is how I approached it:
I use a list to hold all my threads in the __init__ method of my wxFrame class: self.threads = []
As recommended in How to stop a looping thread in Python?, I use a signal flag in my thread class, which is set to True when the thread class is initialized.
class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset
        self.signal = True

    def run(self):
        while self.signal:
            do_stuff()
            sleep()
and I can stop these threads by iterating over my threads:
def OnStop(self, e):
    for t in self.threads:
        t.signal = False
I had a different approach. I subclassed the Thread class, and in the constructor I created an Event object. Then I wrote a custom join() method, which first sets this event and then calls the parent's version of itself.
Here is my class, which I'm using for serial port communication in a wxPython app:
import wx, threading, serial, Events, Queue

class PumpThread(threading.Thread):

    def __init__ (self, port, queue, parent):
        super(PumpThread, self).__init__()
        self.port = port
        self.queue = queue
        self.parent = parent

        self.serial = serial.Serial()
        self.serial.port = self.port
        self.serial.timeout = 0.5
        self.serial.baudrate = 9600
        self.serial.parity = 'N'

        self.stopRequest = threading.Event()

    def run (self):
        try:
            self.serial.open()
        except Exception, ex:
            print ("[ERROR]\tUnable to open port {}".format(self.port))
            print ("[ERROR]\t{}\n\n{}".format(ex.message, ex.traceback))
            self.stopRequest.set()
        else:
            print ("[INFO]\tListening port {}".format(self.port))
            self.serial.write("FLOW?\r")

        while not self.stopRequest.isSet():
            msg = ''
            if not self.queue.empty():
                try:
                    command = self.queue.get()
                    self.serial.write(command)
                except Queue.Empty:
                    continue

            while self.serial.inWaiting():
                char = self.serial.read(1)
                if '\r' in char and len(msg) > 1:
                    char = ''
                    #~ print('[DATA]\t{}'.format(msg))
                    event = Events.PumpDataEvent(Events.SERIALRX, wx.ID_ANY, msg)
                    wx.PostEvent(self.parent, event)
                    msg = ''
                    break
                msg += char

        self.serial.close()

    def join (self, timeout=None):
        self.stopRequest.set()
        super(PumpThread, self).join(timeout)

    def SetPort (self, serial):
        self.serial = serial

    def Write (self, msg):
        if self.serial.is_open:
            self.queue.put(msg)
        else:
            print("[ERROR]\tPort {} is not open!".format(self.port))

    def Stop(self):
        if self.isAlive():
            self.join()
The Queue is used for sending messages to the port, and the main loop takes responses back. I did not use the serial.readline() method because of a different end-of-line character, and I found the usage of the io classes too much fuss.
It depends on what you run in that thread.
If that's your code, then you can implement a stop condition (see other answers).
However, if what you want is to run someone else's code, then you should fork and start a process. Like this:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
Now, whenever you want to stop that process, send it a SIGTERM like this:
proc.terminate()
proc.join()
And it's not slow: fractions of a second.
Enjoy :)
My solution is:
import threading, time

def a():
    t = threading.currentThread()
    while getattr(t, "do_run", True):
        print('Do something')
        time.sleep(1)

def getThreadByName(name):
    threads = threading.enumerate()  # threads list
    for thread in threads:
        if thread.name == name:
            return thread

threading.Thread(target=a, name='228').start()  # init thread
t = getThreadByName('228')  # get thread by name
time.sleep(5)
t.do_run = False  # signal the thread to stop
t.join()
I find it useful to have a class, derived from threading.Thread, to encapsulate my thread functionality. You simply provide your own main loop in an overridden version of run() in this class. Calling start() arranges for the object’s run() method to be invoked in a separate thread.
Inside the main loop, periodically check whether a threading.Event has been set. Such an event is thread-safe.
Inside this class, you have your own join() method that sets the stop event object before calling the join() method of the base class. It can optionally take a time value to pass to the base class's join() method to ensure your thread is terminated in a short amount of time.
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, sleep_time=0.1):
        self._stop_event = threading.Event()
        self._sleep_time = sleep_time
        """call base class constructor"""
        super().__init__()

    def run(self):
        """main control loop"""
        while not self._stop_event.isSet():
            # do work
            print("hi")
            self._stop_event.wait(self._sleep_time)

    def join(self, timeout=None):
        """set stop event and join within a given time period"""
        self._stop_event.set()
        super().join(timeout)

if __name__ == "__main__":
    t = MyThread()
    t.start()
    time.sleep(5)
    t.join(1)  # wait 1s max
Having a small sleep inside the main loop before checking the threading.Event is less CPU intensive than looping continuously. You can have a default sleep time (e.g. 0.1s), but you can also pass the value in the constructor.
Sometimes you don't have control over the running target. In those cases you can use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped
