How to detect if a Python Thread is being killed?

I have my own Thread subclass called TimeBasedLogThread. I would like to fire a function, my_function, when the TimeBasedLogThread is being killed because the main process is exiting. I would like to do this from within the object itself. Is that possible?
Here is my current approach:
import logging
import time
from logging.handlers import MemoryHandler, SMTPHandler
from threading import Thread

class TimeBasedBufferingHandler(MemoryHandler):
    # This is a logging-based handler that buffers logs to send
    # them as emails; the target of this handler is an SMTPHandler.
    def __init__(self, capacity=10, flushLevel=logging.ERROR, target=None,
                 flushOnClose=True, timeout=60):
        MemoryHandler.__init__(self, capacity=capacity, flushLevel=flushLevel,
                               target=target, flushOnClose=flushOnClose)
        self.timeout = timeout  # in seconds (as time.time())

    def flush(self):
        # Send the emails that are younger than timeout, all together
        # in the same email
        ...
class TimeBasedLogThread(Thread):
    def __init__(self, handler, timeout=60):
        Thread.__init__(self)
        self.handler = handler
        self.timeout = timeout

    def run(self):
        while True:
            self.handler.flush()
            time.sleep(self.timeout)

    def my_function(self):
        print("my_function is being called")
        self.handler.flush()
def setup_thread():
    smtp_handler = SMTPHandler()
    new_thread = TimeBasedLogThread(smtp_handler, timeout=10)
    new_thread.start()
In my main thread, I have:
setup_thread()
logging.error("DEBUG_0")
time.sleep(5)
logging.error("DEBUG_1")
time.sleep(5)
logging.error("DEBUG_2")
Each time.sleep(5) releases the main thread 5 seconds before the other thread's timeout elapses. So I receive the first two emails, with "DEBUG_0" and "DEBUG_1", but not the last one, "DEBUG_2", because the main process exits before the timeout has finished.
I would like to link the class TimeBasedLogThread and the function my_function so that it flushes (sends the emails) before exiting. How can I do that? I looked at the source code of the threading module but could not work out which method to use.

Build your cleanup function as a Thread too (e.g. an AfterDeadThread).
You have two strategies here:
TimeBasedLogThread calls AfterDeadThread before dying
AfterDeadThread checks whether TimeBasedLogThread is still alive; if not, it runs some cleanup methods (a sketch of this follows)
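A minimal sketch of the second strategy (AfterDeadThread is the name suggested above; the wiring, and the assumption that the watched thread eventually finishes, are mine):
from threading import Thread

class AfterDeadThread(Thread):
    def __init__(self, watched_thread, on_dead):
        Thread.__init__(self)
        self.watched_thread = watched_thread
        self.on_dead = on_dead

    def run(self):
        self.watched_thread.join()  # blocks until the watched thread dies
        self.on_dead()              # then run the cleanup, e.g. a final flush

# worker = TimeBasedLogThread(smtp_handler, timeout=10)
# watcher = AfterDeadThread(worker, worker.my_function)
# worker.start()
# watcher.start()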

Extend the run() method (which represents the thread's activity) to fire an on_terminate handler that is passed to the custom thread's constructor as a keyword argument.
Using a slightly changed custom thread class (for demonstration):
from threading import Thread
import time, random

class TimeBasedLogThread(Thread):
    def __init__(self, handler, timeout=2, on_terminate=None):
        Thread.__init__(self)
        self.handler = handler
        self.timeout = timeout
        self.terminate_handler = on_terminate

    def run(self):
        while True:
            num = self.handler()
            if num > 5:
                break
            time.sleep(self.timeout)
            print(num)
        if self.terminate_handler:
            self.terminate_handler()
def my_term_function():
    print("my_function is being called")

f = lambda: random.randint(3, 10)
tlog_thread = TimeBasedLogThread(f, on_terminate=my_term_function)
tlog_thread.start()
tlog_thread.join()
Sample output:
3
4
5
4
5
my_function is being called

Related

Can I run cleanup code in daemon threads in python?

Suppose I have some consumer daemon threads that constantly take objects from a queue whenever the main thread puts them there and performs some long operation (a couple of seconds) with them.
The problem is that whenever the main thread is done, the daemon threads are killed before they finish processing whatever is left in the queue.
I know that one way to solve this could be to wait for the daemon threads to finish processing whatever is left in the queue and then exit, but I am curious if there is any way for the daemon threads to "clean up" after themselves (i.e. finish processing whatever is left in the queue) when the main thread exits, without explicitly having the main thread tell the daemon threads to start cleaning up.
The motivation behind this is that I made a python package that has a logging handler class that puts items into a queue whenever the user tries to log something (e.g. with logging.info("message")), and the handler has a daemon thread that sends the logs over the network. I'd prefer if the daemon thread could clean up by itself, so users of the package wouldn't have to manually make sure to make their main thread wait for the log handler to finish its processing.
Minimal working example
# this code is in my package
import logging
from queue import Queue
from threading import Thread

class MyHandler(logging.Handler):
    def __init__(self, level):
        super().__init__(level=level)
        self.queue = Queue()
        self.thread = Thread(target=self.consume, daemon=True)
        self.thread.start()

    def emit(self, record):
        # This gets called whenever the user does logging.info, or similar
        self.queue.put(record)

    def consume(self):
        while True:
            record = self.queue.get()
            send(record)  # send record over network, can take a few seconds (assume it never raises)
            self.queue.task_done()
# This is the user's main code.
# The user has to keep a reference to the handler for later; I want to avoid this.
my_handler = MyHandler(logging.INFO)
# set up logging
logging.basicConfig(..., handlers=[..., my_handler])
# do some stuff...
logging.info("this will be sent over network")
# some more stuff...
logging.error("also sent over network")
# even more stuff
# before exiting, the user must wait for the handler to finish sending;
# I don't want the user to have to do this:
my_handler.queue.join()
You can use threading.main_thread().join(), which waits until the main thread shuts down, like so:
import threading
import logging
import queue

class MyHandler(logging.Handler):
    def __init__(self, level):
        super().__init__(level=level)
        self.queue = queue.Queue()
        self.thread = threading.Thread(target=self.consume)  # Not daemon
        # Shutdown watcher: once the main thread finishes, enqueue a sentinel
        threading.Thread(
            target=lambda: threading.main_thread().join() or self.queue.put(None)
        ).start()
        self.thread.start()

    def emit(self, record):
        # This gets called whenever the user does logging.info, or similar
        self.queue.put(record)

    def consume(self):
        while True:
            record = self.queue.get()
            if record is None:
                print("cleaning")
                return  # Cleanup
            print(record)  # send record over network, can take a few seconds (assume it never raises)
            self.queue.task_done()
Quick test code:
logging.getLogger().setLevel(logging.INFO)
logging.getLogger().addHandler(MyHandler(logging.INFO))
logging.info("Hello")
exit()
You can use atexit to wait until the daemon thread shuts down:
import queue, threading, time, logging, atexit

class MyHandler(logging.Handler):
    def __init__(self, level):
        super().__init__(level=level)
        self.queue = queue.Queue()
        self.thread = threading.Thread(target=self.consume, daemon=True)
        # Right before the main thread exits, signal cleanup and wait until done
        atexit.register(lambda: self.queue.put(None) or self.thread.join())
        self.thread.start()

    def emit(self, record):
        # This gets called whenever the user does logging.info, or similar
        self.queue.put(record)

    def consume(self):
        while True:
            record = self.queue.get()
            if record is None:  # Cleanup requested
                print("cleaning")
                time.sleep(5)
                return
            print(record)  # send record over network, can take a few seconds (assume it never raises)
            self.queue.task_done()
# Test code
logging.getLogger().setLevel(logging.INFO)
logging.getLogger().addHandler(MyHandler(logging.INFO))
logging.info("Hello")

JoinableQueue between two processes: both processes sometimes block forever

I am writing a multiprocess program. There are four classes: Main, Worker, Request and Ack. The Main class is the entry point of the program. It creates a sub-process called Worker to do some jobs. The main process puts a Request into a JoinableQueue, and then the Worker gets the request from the queue. When the Worker has finished the request, it puts an Ack into the queue. Part of the code is shown below:
Main:
class Main():
    def __init__(self):
        self.cmd_queue = JoinableQueue()
        self.worker = Worker(self.cmd_queue)

    def call_worker(self, cmd_code):
        if self.cmd_queue.empty() is True:
            request = Request(cmd_code)
            self.cmd_queue.put(request)
            self.cmd_queue.join()
            ack = self.cmd_queue.get()
            self.cmd_queue.task_done()
            if ack.value == 0:
                return True
            else:
                return False
        else:
            # TODO: Error Handling.
            pass

    def run_worker(self):
        self.worker.start()
Worker:
class Worker(Process):
    def __init__(self, cmd_queue):
        super(Worker, self).__init__()
        self.cmd_queue = cmd_queue
        ...

    def run(self):
        while True:
            ack = Ack(0)
            try:
                request = self.cmd_queue.get()
                if request.cmd_code == ReqCmd.enable_handler:
                    self.enable_handler()
                elif request.cmd_code == ReqCmd.disable_handler:
                    self.disable_handler()
                else:
                    pass
            except Exception:
                ack.value = -1
            finally:
                self.cmd_queue.task_done()
                self.cmd_queue.put(ack)
                self.cmd_queue.join()
It usually works, but sometimes the Main process gets stuck at self.cmd_queue.join() and the Worker gets stuck at its own self.cmd_queue.join(). It is so weird! Does anyone have any ideas? Thanks
There's nothing weird about this issue: you shouldn't call the queue's join() within a typical single-worker process activity, because
Queue.join()
Blocks until all items in the queue have been gotten and
processed.
Calls placed where they are in your current implementation make the processing pipeline stall.
Usually queue.join() is called in the main (supervisor) thread after initiating/starting all threads/workers.
https://docs.python.org/3/library/queue.html#queue.Queue.join
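For illustration, a minimal sketch of the usual layout (this is an assumption, not the asker's exact code): one plain Queue per direction, with no join()/task_done() in the worker, so neither side can deadlock on the shared queue:
from multiprocessing import Process, Queue

def worker(requests, acks):
    while True:
        cmd = requests.get()
        if cmd is None:        # sentinel: supervisor wants us to exit
            break
        try:
            # ... handle cmd, e.g. enable/disable a handler ...
            acks.put(0)        # success
        except Exception:
            acks.put(-1)       # failure

if __name__ == '__main__':
    requests, acks = Queue(), Queue()
    p = Process(target=worker, args=(requests, acks))
    p.start()
    requests.put('enable_handler')
    print(acks.get())          # blocks only until this request is acknowledged
    requests.put(None)         # shut the worker down
    p.join()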

pyserial time based repeated data request

I am writing an application in Python to acquire data over a serial connection. I use the pyserial library to establish the communication. What is the best approach to request data at an interval (e.g. every 2 seconds)? I always have to send a request, wait for the answer, and start the process again.
If this is a "slow" process that does not need accurate time precision, use a while loop and time.sleep(2) to pause the process for 2 seconds between requests, roughly as sketched below.
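For example, a rough sketch with pyserial (the port name, baud rate and request bytes are assumptions about the instrument):
import time
import serial  # pyserial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
while True:
    ser.write(b'READ?\n')    # send the request
    answer = ser.readline()  # wait (up to the 1 s timeout) for the reply
    print(answer)
    time.sleep(2)            # crude 2 s interval; drifts by however long the I/O took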
I thought about using a separate thread to prevent the rest of the application from freezing. The thread takes a function which requests data from the instrument.
class ReadingThread(Thread):
    '''Used to read from the instrument, pausing for an interval between reads'''
    def __init__(self, controller, lock, function, queue_out, interval=3, *args, **kwargs):
        Thread.__init__(self)
        self.controller = controller
        self.lock = lock
        self.function = function
        self.queue_out = queue_out
        self.interval = interval
        self.args = args
        self.kwargs = kwargs
        self.is_running = True

    def run(self):
        while self.is_running:
            with self.lock:
                try:
                    result = self.function(*self.args, **self.kwargs)
                except Exception as e:
                    print(e)
                else:
                    self.queue_out.put(result)
            time.sleep(self.interval)  # sleep outside the lock so others can use the port

    def stop(self):
        self.is_running = False
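A possible way to wire this up (poll_instrument is a placeholder for the function that actually queries the instrument):
import time, queue, threading

def poll_instrument():
    return 'reading'

results = queue.Queue()
t = ReadingThread(controller=None, lock=threading.Lock(),
                  function=poll_instrument, queue_out=results, interval=2)
t.start()
time.sleep(6)  # let it poll a few times
t.stop()       # the loop exits after the current iteration
print(results.get())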

How to implement a daemon stoppable polling thread in Python?

From my understanding, there is no such out-of-the-box solution in the Python stdlib.
The solution has to have the following characteristics:
Start the thread as daemon with a target function and optional arguments
Have polling: the thread should rerun the target every X seconds
Allow easy and graceful stopping: Not breaking target execution midway
Expose the ability to stop from outside the program it belongs to, in particular be able to stop the thread from testing code.
Allow thread restarting after stop (or pseudo-restarting with a new thread)
I ran across a couple of suggestions on SO, but would like to aggregate any collected knowledge here (I will contribute to that in a follow-up answer), so that any new, alternative or additional ideas can be heard.
My proposal uses the threading library, as it is advertised as higher-level than thread.
A middle ground is this solution, found in another SO answer:
def main():
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop))
    t.daemon = True
    t.start()
    time.sleep(duration)  # `duration`: how long to let the poller run
    # stop the thread
    t_stop.set()

def thread(arg, stop_event):
    while not stop_event.is_set():
        # Code to execute here
        stop_event.wait(interval)  # `interval`: seconds between runs; wakes early on stop
This, unfortunately, requires us to have the t_stop object handy when testing, in order to stop the thread, and that handle to the object is not designed to be exposed.
A solution would be to add the t and t_stop handles to a top-level or global dictionary somewhere for the testing code to reach, as sketched below.
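A tiny sketch of that idea (the POLLING_THREADS name and the 'poller' key are made up here):
POLLING_THREADS = {}

def main():
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop))
    t.daemon = True
    t.start()
    POLLING_THREADS['poller'] = (t, t_stop)

# In the testing code:
# from mymodule import POLLING_THREADS
# POLLING_THREADS['poller'][1].set()   # stop the poller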
Another solution (copied and improved from elsewhere) uses the following:
def main():
    t = DaemonStoppableThread(sleep_time, target=target_function,
                              name='polling_thread',
                              args=(arg1, arg2))
    t.start()

# Stopping code from a test
def stop_polling_threads():
    threads = threading.enumerate()
    polling_threads = [thread for thread in threads
                       if 'polling_thread' in thread.getName()]
    for poll_thread in polling_threads:
        poll_thread.stop()

class DaemonStoppableThread(threading.Thread):
    def __init__(self, sleep_time, target=None, **kwargs):
        super(DaemonStoppableThread, self).__init__(target=target, **kwargs)
        self.setDaemon(True)
        self.stop_event = threading.Event()
        self.sleep_time = sleep_time
        self.target = target

    def stop(self):
        self.stop_event.set()

    def stopped(self):
        return self.stop_event.isSet()

    def run(self):
        while not self.stopped():
            if self.target:
                self.target()
            else:
                raise Exception('No target function given')
            self.stop_event.wait(self.sleep_time)
As good as these solutions may be, none of them addresses restarting the polling target function.
I avoided using the expression "restarting thread", as I understand that python threads cannot be restarted, so a new thread will have to be used to allow for this "pseudo-restarting"
EDIT:
To improve on the above, a solution to start/stop the polling target multiple times:
class ThreadManager(object):
    def __init__(self):
        self.thread = None

    def start_thread(self):
        if not self.thread or not self.thread.is_alive():
            self.thread = DaemonStoppableThread(sleep_time=5, target=some_func, args=(1, 2))
            self.thread.start()
            return 'thread running'

    def stop_thread(self):
        if self.thread and self.thread.is_alive():
            self.thread.stop()
            return 'thread stopped'
        else:
            return 'dead thread'

    def check_thread(self):
        if self.thread and self.thread.is_alive():
            return 'thread alive'
        else:
            return 'dead_thread'
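Example use, assuming some_func from the snippet above is defined:
manager = ThreadManager()
print(manager.start_thread())  # 'thread running'
print(manager.check_thread())  # 'thread alive'
print(manager.stop_thread())   # 'thread stopped'
print(manager.start_thread())  # a fresh thread: 'thread running' again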

Signal not connecting to a method

I am working with sockets. When I receive info from the server, I handle it with a method that runs in a thread. I want to pop up windows from there, so I use signals.
The problem is that the signal does not trigger the function. Here is a minimal example:
class Client(QtCore.QObject):
    signal = QtCore.pyqtSignal()

    def __init__(self):
        super(Client, self).__init__()
        self.thread_wait_server = threading.Thread(target=self.wait_server)
        self.thread_wait_server.daemon = True
        self.thread_wait_server.start()

    def wait_server(self):
        print('waiting')
        self.signal.emit()
        print("signal emitted")

class Main:
    def Do(self):
        print("'Do' starts")
        self.Launch()
        time.sleep(2)
        print("'Do' ends")

    def Launch(self):
        print("'Launch' starts")
        self.client = Client()
        self.client.signal.connect(self.Tester)
        print("'Launch' ends")

    def Tester(self):
        print("Tester Fired!!")

m = Main()
m.Do()
The Tester function is never triggered.
The problem with your code is that you are emitting the signal before connecting it to the slot. Add two print statements like this:
print("connecting the signal")
self.client.signal.connect(self.Tester)
print("signal connected")
You will notice that the signal gets emitted before it gets connected. That's why the slot is not triggering.
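One possible fix, as a sketch: defer starting the worker thread until after the signal is connected, for example by moving the start() call out of __init__:
class Client(QtCore.QObject):
    signal = QtCore.pyqtSignal()

    def __init__(self):
        super(Client, self).__init__()
        self.thread_wait_server = threading.Thread(target=self.wait_server)
        self.thread_wait_server.daemon = True

    def start(self):
        self.thread_wait_server.start()

    def wait_server(self):
        print('waiting')
        self.signal.emit()

# In Main.Launch:
# self.client = Client()
# self.client.signal.connect(self.Tester)
# self.client.start()   # the emit can now only happen after the connection exists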
