Python - Killing more than one thread on Control+C

What am I doing wrong?
I simply need to kill both threads on Control+C.
def cleanup_stop_thread():
    for thread in enumerate():
        if thread.isAlive():
            try:
                self._Thread__stop()
            except:
                print(str(thread.getName()) + ' could not be terminated')

if __name__ == '__main__':
    try:
        threading.Thread(target = record).start()
        threading.Thread(target = ftp).start()
    except (KeyboardInterrupt, SystemExit):
        cleanup_stop_thread();
        sys.exit()

Instead of trying to kill them on Ctrl+C, why don't you just make them daemon threads? Then they exit automatically when the main thread dies.
t1 = threading.Thread(target=record)
t1.daemon = True
t1.start()
t2 = threading.Thread(target=ftp)
t2.daemon = True
t2.start()
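One caveat (my addition, not part of the original answer): with daemon threads the main thread has to stay alive long enough to receive the Ctrl+C, otherwise the process exits right after starting the workers. A minimal sketch, assuming record and ftp are the long-running functions from the question:
import threading
import time

def record():
    while True:
        time.sleep(1)

def ftp():
    while True:
        time.sleep(1)

t1 = threading.Thread(target=record, daemon=True)
t2 = threading.Thread(target=ftp, daemon=True)
t1.start()
t2.start()

try:
    while t1.is_alive() or t2.is_alive():
        time.sleep(0.5)    # keep the main thread alive and responsive to Ctrl+C
except KeyboardInterrupt:
    pass                   # daemon threads die together with the main thread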

If you want to kill all threads when typing Ctrl+C, just add a try block, import os, and call os._exit(0) when you want to kill everything.
Also check the atexit module.
Hope it helped :)
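A minimal sketch of that approach (the worker function here is illustrative, not from the original post):
import os
import threading
import time

def worker():
    while True:
        time.sleep(1)

threading.Thread(target=worker).start()

try:
    while True:
        time.sleep(0.5)
except KeyboardInterrupt:
    os._exit(0)   # hard-exits the whole process, taking every thread with it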

Related

Python - Is this threading / looping / waiting code bad? It 'pegs' the CPU

I have a Python application where thread1 calls an API to see which reports are ready to download and sends each report_id to thread2, which downloads and processes those reports. These threads iterate over a dictionary and then wait 5 minutes. Even while the code is 'doing nothing' it is pegging the CPU.
I am not sure if the CPU is being pegged in a thread or in main. In main I have some handlers to check for a stop signal, so I posted most of that code. I have a few threads that do similar tasks and wait in similar ways at the end of their loops. The areas I suspect are related to the pegged CPU are: a) in main, the 'while run: pass' loop; b) in each of the threads, 'is_killed = self._kill.wait(600)', where self._kill is a threading.Event().
Any idea what is pegging the CPU?
if __name__ == '__main__':
    t2 = ProcessReport()
    t2.start()
    t1 = RequestReport(t2)
    t1.start()
    t3 = report_test()
    t3.start()
    t4 = run_flask()
    t4.start()

    run = True
    signal.signal(signal.SIGINT, handler_stop_signals)
    signal.signal(signal.SIGTERM, handler_stop_signals)
    signal.signal(signal.SIGHUP, handler_stop_signals)

    while run:
        pass  # Stay here until kill

    print("About to kill all threads in clean order")
    t3.kill()
    t3.join()
    t1.kill()
    t1.join()
    t2.kill()
    t2.join()
    print("Clean Exit")
    sys.exit()
Signal Handler
def handler_stop_signals(signum, frame):
    global run
    run = False
One of the threads
class report_test(Thread):
    def __init__(self):
        Thread.__init__(self)
        self._kill = threading.Event()

    def kill(self):
        self._kill.set()

    def run(self):
        while True:
            cursor.execute("SELECT * FROM tbl_rpt_log where cron='1'")
            reports_to_run = cursor.fetchall()
            for row in reports_to_run:
                report_on_row(row)

            is_killed = self._kill.wait(600)
            if is_killed:
                print("Killing - ReportCheckTable")
                break
The problem (the main one, at least) is:
while run:
    pass  # Stay here until kill
This is because the only operation in that loop is evaluating its condition, so the CPU "can't catch a break".
I didn't look through the whole code to understand it at a high level, but the quickest way to work around it is:
import time
# ...
while run:
    time.sleep(0.1)  # Stay here until kill
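If you want to avoid polling altogether, a threading.Event can replace the run flag (my suggestion, not part of the original answer):
import threading

stop_event = threading.Event()

def handler_stop_signals(signum, frame):
    stop_event.set()

# register the same signal handlers as in the question, then replace the
# busy loop with:
while not stop_event.is_set():
    stop_event.wait(1.0)   # blocks up to 1 second at a time, near-zero CPU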

I need to stop multi-threaded code from running

I'm trying to stop the code from running when the user presses Ctrl+Shift+C. I use the code below. Unfortunately sys.exit() stops only the wait_for_ctrl_shift_c function, but not main_func. What should I use to stop them both?
Thanks.
def wait_for_ctrl_shift_c():
    print('wait_for_ctrl_shift_c is working')
    keyboard.wait('ctrl+shift+c')
    print('wait_for_ctrl_shift_c was pressed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
    sys.exit()

def main_func():
    a = 0
    while True:
        print('Working2 ', a)
        a = a + 1
        sleep(1)

if __name__ == '__main__':
    Thread(target = wait_for_ctrl_shift_c).start()
    Thread(target = main_func).start()
There are multiple ways to do it. First of all, you have 3 threads: the main thread plus the 2 you create (the infinite loop and the keyboard one).
You can register signals and handle them, and you can also call interrupt_main to interrupt the main thread (not the while-loop thread); the interrupt goes to the main thread's exception handler. Also, instead of looping on True, I changed the second thread to check an attribute so it can exit cleanly.
import os
import threading
import time
import sys
import _thread
import keyboard  # third-party 'keyboard' package, as used in the question

def wait_for_ctrl_shift_c():
    print('wait_for_ctrl_shift_c is working')
    keyboard.wait('ctrl+shift+c')
    print('exiting thread')
    _thread.interrupt_main()
    sys.exit()

def main_func():
    a = 0
    t = threading.currentThread()
    while getattr(t, "run", True):
        print('Working2 ', a)
        a = a + 1
        time.sleep(1)
    print('exiting main_func')

if __name__ == '__main__':
    try:
        t1 = threading.Thread(target = wait_for_ctrl_shift_c)
        t2 = threading.Thread(target = main_func)
        t1.start()
        t2.start()
        t1.join()
        t2.join()
    except:
        print('main exiting')
        t2.run = False
        sys.exit()
Open a shell in another window, type ps to list the running processes, and kill the Python one (via kill 3145, if 3145 is its PID) to stop them both. This way we kill the process within which these threads run.
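If it helps, the script can print its own PID at startup so you know which process to kill from the other shell (a small addition of mine, not in the original answer):
import os
print('kill me from another shell with: kill', os.getpid())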

killing a thread without waiting for join

I want to kill a thread in Python. This thread can run in a blocking operation and join can't terminate it.
Similar to this:
from threading import Thread
import time

def block():
    while True:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target = block)
    thread.start()
    # kill thread
    # do other stuff
My problem is that the real blocking operation is in another module that is not mine, so there is no place where I can break out of it with a running variable.
The thread will be killed when exiting the main process if you set it up as a daemon:
from threading import Thread
import time
import sys

def block():
    while True:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target = block, daemon = True)
    thread.start()
    sys.exit(0)
Otherwise just set a flag. I'm using a bad example here (you should use some synchronization, not just a plain variable):
from threading import Thread
import time

RUNNING = True

def block():
    global RUNNING
    while RUNNING:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target = block, daemon = True)
    thread.start()
    RUNNING = False  # thread will stop; not killed until next loop iteration
    # ... continue your stuff here
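For the synchronization the answer alludes to, a threading.Event is the usual choice; a minimal sketch (my variant, not the original poster's code):
from threading import Thread, Event
import time

stop_event = Event()

def block():
    while not stop_event.is_set():
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    # ... do other stuff ...
    stop_event.set()   # thread exits at its next check of the flag
    thread.join()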
Use a running variable:
from threading import Thread
import time

running = True

def block():
    global running
    while running:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target = block)
    thread.start()
    running = False
    # do other stuff
I would prefer to wrap it all in a class, but this should work (untested though).
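One possible way to wrap it in a class, as the answer suggests (a sketch of mine; the name StoppableWorker is made up, and I'm using an Event rather than the plain variable):
from threading import Thread, Event
import time

class StoppableWorker(Thread):
    def __init__(self):
        super().__init__()
        self._stop_event = Event()

    def stop(self):
        self._stop_event.set()

    def run(self):
        while not self._stop_event.is_set():
            print("running")
            time.sleep(1)

if __name__ == "__main__":
    worker = StoppableWorker()
    worker.start()
    # do other stuff
    worker.stop()
    worker.join()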
EDIT
There is a way to asynchronously raise an exception in a separate thread which could be caught by a try: except: block, but it's a dirty dirty hack: https://gist.github.com/liuw/2407154
Original post
"I want to kill a thread in python." you can't. Threads are only killed when they're daemons when there are no more non-daemonic threads running from the parent process. Any thread can be asked nicely to terminate itself using standard inter-thread communication methods, but you state that you don't have any chance to interrupt the function you want to kill. This leaves processes.
Processes have more overhead, and are more difficult to pass data to and from, but they do support being killed by sending SIGTERM or SIGKILL.
from multiprocessing import Process, Queue
from time import sleep

def workfunction(*args, **kwargs):  # any arguments you send to a child process must be picklable by python's pickle module
    sleep(args[0])  # really long computation you might want to kill
    return 'results'  # anything you want to get back from a child process must be picklable by python's pickle module

class daemon_worker(Process):
    def __init__(self, target_func, *args, **kwargs):
        self.return_queue = Queue()
        self.target_func = target_func
        self.args = args
        self.kwargs = kwargs
        super().__init__(daemon=True)
        self.start()

    def run(self):  # called by self.start()
        self.return_queue.put(self.target_func(*self.args, **self.kwargs))

    def get_result(self):  # raises queue.Empty if no result is ready
        return self.return_queue.get()

if __name__ == '__main__':
    # start some work that takes 1 sec:
    worker1 = daemon_worker(workfunction, 1)
    worker1.join(3)  # wait up to 3 sec for the worker to complete

    if not worker1.is_alive():  # if we didn't hit the 3 sec timeout
        print('worker1 got: {}'.format(worker1.get_result()))
    else:
        print('worker1 still running')
        worker1.terminate()
        print('killing worker1')
        sleep(.1)  # calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet
    print('worker1 is alive: {}'.format(worker1.is_alive()))

    # start some work that takes 100 sec:
    worker2 = daemon_worker(workfunction, 100)
    worker2.join(3)  # wait up to 3 sec for the worker to complete

    if not worker2.is_alive():  # if we didn't hit the 3 sec timeout
        print('worker2 got: {}'.format(worker2.get_result()))
    else:
        print('worker2 still running')
        worker2.terminate()
        print('killing worker2')
        sleep(.1)  # calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet
    print('worker2 is alive: {}'.format(worker2.is_alive()))

KeyboardInterrupt does not work in multithreaded Python

I am trying to use multithreading to check the network connection. My code is:
exitFlag = 0
lst_doxygen = []
lst_sphinx = []

class myThread (threading.Thread):
    def __init__(self, counter):
        threading.Thread.__init__(self)
        self.counter = counter

    def run(self):
        print "Starting thread"
        link_urls(self.counter)

def link_urls(delay):
    global lst_doxygen
    global lst_sphinx
    global exitFlag
    while exitFlag == 0:
        try:
            if network_connection() is True:
                try:
                    links = lxml.html.parse(gr.prefs().get_string('grc', 'doxygen_base_uri', '').split(',')[1] + "annotated.html").xpath("//a/@href")
                    for url in links:
                        lst_doxygen.append(url)
                    links = lxml.html.parse(gr.prefs().get_string('grc', 'sphinx_base_uri', '').split(',')[1] + "genindex.html").xpath("//a/@href")
                    for url in links:
                        lst_sphinx.append(url)
                    exitFlag = 1
                except (IOError, AttributeError):
                    pass
            time.sleep(delay)
            print "my"
        except KeyboardInterrupt:
            exitFlag = 1

def network_connection():
    network = False
    try:
        response = urllib2.urlopen("http://google.com", None, 2.5)
        network = True
    except urllib2.URLError, e:
        pass
    return network
I have set a flag to stop the thread inside the while loop. I also want to exit the thread by pressing Ctrl-C, so I have used try-except, but the thread keeps working and does not exit. If I try to use
if KeyboardInterrupt:
    exitFlag = 1
instead of try-except, the thread only runs the first iteration of the while loop and then exits.
P.S.
I have created the instance of the myThread class in another module.
Finally, I got the answer to my question. I need to flag my thread as a daemon. So when I create the instance of the myThread class, I add one more line:
thread1 = myThread(2)
thread1.setDaemon(True)
thread1.start()
You only get signals or KeyboardInterrupt on the main thread. There are various ways to handle it, but perhaps you could make exitFlag a global and move the exception handler to your main thread.
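Applied to the question's code, that suggestion might look roughly like this (a sketch of mine; mymodule is a hypothetical name for the module where myThread and exitFlag live):
import mymodule  # hypothetical module containing myThread and exitFlag

thread1 = mymodule.myThread(2)
thread1.start()
try:
    while thread1.isAlive():
        thread1.join(0.5)        # short timeouts keep the main thread responsive to Ctrl+C
except KeyboardInterrupt:
    mymodule.exitFlag = 1        # the worker's while loop sees the flag and stops
    thread1.join()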
Here is how I catch a Ctrl-C in general.
import time
import signal
import sys
import threading

stop = False

def run():
    while not stop:
        print 'I am alive'
        time.sleep(3)

def signal_handler(signal, frame):
    global stop
    print 'You pressed Ctrl+C!'
    stop = True

t1 = threading.Thread(target=run)
t1.start()

signal.signal(signal.SIGINT, signal_handler)
print 'Press Ctrl+C'
signal.pause()
output:
python threads.py
Press Ctrl+C
I am alive
I am alive
^CYou pressed Ctrl+C!

threading ignores KeyboardInterrupt exception

I'm running this simple code:
import threading, time

class reqthread(threading.Thread):
    def run(self):
        for i in range(0, 10):
            time.sleep(1)
            print('.')

try:
    thread = reqthread()
    thread.start()
except (KeyboardInterrupt, SystemExit):
    print('\n! Received keyboard interrupt, quitting threads.\n')
But when I run it, it prints
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
In fact, the Python thread ignores my Ctrl+C keyboard interrupt and never prints 'Received keyboard interrupt'. Why? What is wrong with this code?
Try
try:
    thread = reqthread()
    thread.daemon = True
    thread.start()
    while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
Without the call to time.sleep, the main process is jumping out of the try...except block too early, so the KeyboardInterrupt is not caught. My first thought was to use thread.join, but that seems to block the main process (ignoring KeyboardInterrupt) until the thread is finished.
thread.daemon=True causes the thread to terminate when the main process ends.
To summarize the changes recommended in the comments, the following works well for me:
try:
    thread = reqthread()
    thread.start()
    while thread.isAlive():
        thread.join(1)  # not sure if there is an appreciable cost to this
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
    sys.exit()
Slight modification of ubuntu's solution.
Removing thread.daemon = True as suggested by Eric and replacing the sleeping loop with signal.pause():
import signal

try:
    thread = reqthread()
    thread.start()
    signal.pause()  # instead of: while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
My (hacky) solution is to monkey-patch Thread.join() like this:
def initThreadJoinHack():
    import threading, thread
    mainThread = threading.currentThread()
    assert isinstance(mainThread, threading._MainThread)
    mainThreadId = thread.get_ident()
    join_orig = threading.Thread.join

    def join_hacked(threadObj, timeout=None):
        """
        :type threadObj: threading.Thread
        :type timeout: float|None
        """
        if timeout is None and thread.get_ident() == mainThreadId:
            # This is a HACK for Thread.join() if we are in the main thread.
            # In that case, a Thread.join(timeout=None) would hang and even not respond to signals
            # because signals will get delivered to other threads and Python would forward
            # them for delayed handling to the main thread which hangs.
            # See CPython signalmodule.c.
            # Currently the best solution I can think of:
            while threadObj.isAlive():
                join_orig(threadObj, timeout=0.1)
        else:
            # In all other cases, we can use the original.
            join_orig(threadObj, timeout=timeout)

    threading.Thread.join = join_hacked
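Presumably you would call initThreadJoinHack() once at startup, before starting any worker threads; after that a plain thread.join() in the main thread loops in 0.1-second slices and therefore still reacts to Ctrl+C.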
Putting the try ... except in each thread and also a signal.pause() in the real main() works for me.
Watch out for the import lock though. I am guessing this is why Python doesn't handle Ctrl-C by default.
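A minimal sketch of the main()-side of that approach, as I read it (reusing the reqthread class from the question above; note that signal.pause() is unavailable on Windows):
import signal

def main():
    thread = reqthread()
    thread.start()
    try:
        signal.pause()   # sleep in the main thread until a signal arrives
    except KeyboardInterrupt:
        print('\n! Received keyboard interrupt, quitting threads.\n')

if __name__ == '__main__':
    main()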
