How to send a CTRL-C signal to individual threads in Python?

I am trying to figure out how to properly send a CTRL-C signal on Windows using Python. Earlier I was messing around with youtube-dl, embedded it into a PyQt QThread to do the processing, and created a stop button to stop the thread. But when trying to download a livestream I was unable to get FFmpeg to stop even after closing the application, and I'd have to kill the process manually, which breaks the video every time.
I knew I'd have to send it a CTRL-C signal somehow and ended up using this:
os.kill(signal.CTRL_C_EVENT, 0)
I was actually able to get it to work, but if you try to download more than one video and then stop one of the threads with the above signal, it kills all the downloads.
Is there any way to send the signal to just one thread without affecting the others?
Here is an example of some regular Python code with two separate threads, where the CTRL-C signal is fired in thread_2 after 10 seconds, which ends up killing thread_1:
import os
import signal
import threading
import time

import youtube_dl

def thread_1():
    print("thread_1 running")
    url = 'https://www.cbsnews.com/common/video/cbsn_header_prod.m3u8'
    path = 'C:\\Users\\Richard\\Desktop\\'
    ydl_opts = {
        'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best',
        'outtmpl': '{0}%(title)s-%(id)s.%(ext)s'.format(path),
        'nopart': True,
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        try:
            ydl.download([url])
        except KeyboardInterrupt:
            print('stopped')

def thread_2():
    print("thread_2 running")
    time.sleep(10)
    os.kill(signal.CTRL_C_EVENT, 0)

def launch_thread(target, message, args=(), kwargs=None):
    def thread_msg(*args, **kwargs):
        target(*args, **kwargs)
        print(message)
    thread = threading.Thread(target=thread_msg, args=args, kwargs=kwargs or {})
    thread.start()
    return thread

if __name__ == '__main__':
    thread1 = launch_thread(thread_1, "finished thread_1")
    thread2 = launch_thread(thread_2, "finished thread_2")
Does anyone have any suggestions or ideas? Thanks.

It is not possible to send signals to another thread, so you need to do something else.
You could possibly raise an exception in another thread, using this hack (for which I won't copy the source here because it comes with an MIT license):
http://tomerfiliba.com/recipes/Thread2/
With that, you could send a KeyboardInterrupt exception to the other thread, which is what happens with Ctrl-C anyway.
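The core of that recipe is CPython's PyThreadState_SetAsyncExc API. A minimal sketch of the mechanism (CPython-only and best-effort: the exception is only delivered once the target thread executes Python bytecode, so it won't interrupt a blocking C call):
import ctypes

def async_raise(thread, exc_type=KeyboardInterrupt):
    # Ask the interpreter to raise exc_type inside the given thread.
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), ctypes.py_object(exc_type))
    if res > 1:
        # More than one thread state affected: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(thread.ident), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")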
While it seems like this would do what you want, it would still break the video which is currently downloading.
On the other hand, since you seem to only be interested in killing all threads when the main thread exits, that can be done in a much simpler way:
Configure all threads as daemons, e.g.:
thread = threading.Thread(target=thread_msg, args=args, kwargs=kwargs)
thread.daemon = True
thread.start()
These threads will exit when the main thread exits, without any additional intervention needed from you.

Is there any way to send the signal to just one thread without affecting the others?
I am not a Python expert, but if I were trying to solve your problem, after reading about signal handling in Python 3, I would plan to use multiple processes instead of multiple threads within a single process.
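A minimal sketch of that idea, with a hypothetical download_worker standing in for the real youtube-dl call; since each download is a full process, it can be terminated individually without touching the others:
import multiprocessing
import time

def download_worker(url):
    # Placeholder for the real download logic (e.g. invoking youtube-dl).
    while True:
        time.sleep(1)

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=download_worker, args=(u,))
             for u in ('url1', 'url2')]
    for p in procs:
        p.start()
    time.sleep(10)
    procs[0].terminate()  # stops only the first download; the second keeps running
    procs[0].join()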

You can use signal.pthread_kill (Unix only; it is not available on Windows):
from itertools import count
from signal import SIGTSTP, pthread_kill
from threading import Thread
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
Result:
0
1
2
3
4
[14]+  Stopped

Related

Understanding implementation of parallel programming via threading

Scenario:
1. A sensor continuously sends data at an interval of 100 milliseconds (the interval needs to be configurable).
2. Thread 1 reads the data continuously from the sensor and writes it to a common queue.
3. This process is continuous until a keyboard interrupt happens.
4. Thread 2 locks the queue (and may momentarily block Thread 1).
5. Thread 2 reads the full data from the queue into a temporary structure.
6. Thread 2 releases the queue.
7. Thread 2 processes the data in the structure. This is a computational task; while it runs, Thread 1 should keep filling the queue with sensor data.
I have read about threading and the GIL, so step 7 cannot afford any loss of the data sent by the sensor while Thread 2 performs its computation.
How can this be implemented in Python?
What I started with is:
from queue import Queue
from threading import Thread

q = Queue(maxsize=10)

def fun1():
    fun2Thread = Thread(target=fun2)
    fun2Thread.start()
    while True:
        try:
            q.put(1)
        except KeyboardInterrupt:
            print("Key Interrupt")
            fun2Thread.join()

def fun2():
    print(q.get())

def read():
    fun1Thread = Thread(target=fun1)
    fun1Thread.start()
    fun1Thread.join()

read()
The issue I'm facing is that the terminal gets stuck after printing 1. Can someone please guide me on how to implement this scenario?
Here's an example that may help.
We have a main program (driver), a client, and a server. The main program manages queue construction and the starting and ending of the subprocesses.
The client sends a range of values via a queue to the server. When the range is exhausted, it tells the server to terminate. There's a delay (sleep) in enqueueing the data for demonstration purposes.
Try running it once without any interrupt and note how everything terminates nicely. Then run it again, interrupt it (Ctrl-C), and again note a clean termination.
from multiprocessing import Queue, Process
from signal import signal, SIGINT, SIG_IGN
from time import sleep

def client(q, default):
    signal(SIGINT, default)  # restore the original SIGINT handler in the client
    try:
        for i in range(10):
            sleep(0.5)
            q.put(i)
    except KeyboardInterrupt:
        pass
    finally:
        q.put(-1)  # sentinel: tell the server to terminate

def server(q):
    while (v := q.get()) != -1:
        print(v)

def main():
    q = Queue()
    default = signal(SIGINT, SIG_IGN)  # ignore SIGINT; children inherit this
    (server_p := Process(target=server, args=(q,))).start()
    (client_p := Process(target=client, args=(q, default))).start()
    client_p.join()
    server_p.join()

if __name__ == '__main__':
    main()
EDIT:
Edited to ensure that the server process continues to drain the queue if the client is terminated due to a KeyboardInterrupt (SIGINT)

Python function to print a dot to console while HTTP call executes

I am somewhat new to Python. I have looked around but cannot find an answer that fits exactly what I am looking for.
I have a function that makes an HTTP call using the requests package. I'd like to print a '.' to the screen (or any char), say every 10 seconds, while the HTTP request executes, and stop printing when it finishes. So something like:
def make_call():
    rsp = requests.post(url, data=file)
    # while requests.post is executing, print('.')
Of course the above is just pseudocode, but hopefully it illustrates what I am hoping to accomplish.
Every function call from the requests module is blocking, so your program waits until the function returns a value. The simplest solution is to use the built-in threading library, which was already suggested. Using this module allows you to use code "parallelism"*. In your example you need one thread for the request, which is blocked until the request finishes, and another for the printing.
If you want to learn about more advanced solutions, see this answer: https://stackoverflow.com/a/14246030/17726897
Here's how you can achieve your desired functionality using the threading module:
import threading
from time import sleep

import requests

def print_function(stop_event):
    while not stop_event.is_set():
        print(".")
        sleep(10)

should_stop = threading.Event()
thread = threading.Thread(target=print_function, args=[should_stop])
thread.start()  # start the printing before the request
rsp = requests.post(url, data=file)  # blocks the main thread but not the printing thread
should_stop.set()  # request finished; signal the printing thread to stop
thread.join()  # wait for the thread to stop
# handle response
* Parallelism is in quotes because of the Global Interpreter Lock (GIL): code statements from different threads aren't executed at the same time.
I don't really get what you're looking for, but if you want two things processed at the same time you can use the threading module.
Example:
import threading
import requests
from time import sleep

def make_request():
    while True:
        req = requests.post(url, data=file)
        sleep(10)

make_request_thread = threading.Thread(target=make_request)
make_request_thread.start()

while True:
    print("I used multithreading to do two tasks at the same time")
    sleep(10)
Or you can use the very simple schedule module to schedule your tasks in an easy way.
Docs: https://schedule.readthedocs.io/en/stable/#when-not-to-use-schedule
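For completeness, a minimal sketch with schedule (note that schedule runs jobs in the calling thread, so the blocking request would still need its own thread):
import time

import schedule

def print_dot():
    print('.', end='', flush=True)

schedule.every(10).seconds.do(print_dot)
while True:
    schedule.run_pending()  # run any jobs that are due
    time.sleep(1)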
import threading
import requests
from time import sleep

### Function that prints until the answer comes ###
def print_every_10_seconds(stop_event):
    while not stop_event.is_set():
        print(".")
        sleep(10)

### Separate flow of execution ###
stop = threading.Event()
thread = threading.Thread(target=print_every_10_seconds, args=[stop])

### Before the request, the thread starts printing ###
thread.start()

### The request blocks the main thread (the print thread continues) ###
response = requests.post(url, data=file)

### The thread stops ###
stop.set()
thread.join()
The code below also solves the problem. It prints "POST Data.." and an additional trailing '.' every second until the HTTP POST returns.
import concurrent.futures as fp
import logging
from time import sleep

import requests

logging.basicConfig(level=logging.INFO)  # make logging.info output visible

with fp.ThreadPoolExecutor(max_workers=1) as executor:
    # url and fileobj are defined elsewhere
    post = executor.submit(requests.post, url, data=fileobj, timeout=20)
    logging.StreamHandler.terminator = ''
    logging.info("POST Data..")
    while post.running():
        print('.', end='', flush=True)
        sleep(1)
    print('')
    logging.StreamHandler.terminator = '\n'
    http_response = post.result()

How to graceful shut down coroutines with Ctrl+C?

I'm writing a spider to crawl web pages. I know asyncio may be my best choice, so I use coroutines to process the work asynchronously. Now I'm scratching my head about how to quit the program on a keyboard interrupt. The program shuts down fine after all the work has been done. The source code runs on Python 3.5 and is attached below.
import asyncio
import aiohttp
from contextlib import suppress

class Spider(object):
    def __init__(self):
        self.max_tasks = 2
        self.task_queue = asyncio.Queue(self.max_tasks)
        self.loop = asyncio.get_event_loop()
        self.counter = 1

    def close(self):
        for w in self.workers:
            w.cancel()

    async def fetch(self, url):
        try:
            async with aiohttp.ClientSession(loop=self.loop) as self.session:
                with aiohttp.Timeout(30, loop=self.session.loop):
                    async with self.session.get(url) as resp:
                        print('get response from url: %s' % url)
        except:
            pass
        finally:
            pass

    async def work(self):
        while True:
            url = await self.task_queue.get()
            await self.fetch(url)
            self.task_queue.task_done()

    def assign_work(self):
        print('[*]assigning work...')
        url = 'https://www.python.org/'
        if self.counter > 10:
            return 'done'
        for _ in range(self.max_tasks):
            self.counter += 1
            self.task_queue.put_nowait(url)

    async def crawl(self):
        self.workers = [self.loop.create_task(self.work())
                        for _ in range(self.max_tasks)]
        while True:
            if self.assign_work() == 'done':
                break
            await self.task_queue.join()
        self.close()

def main():
    loop = asyncio.get_event_loop()
    spider = Spider()
    try:
        loop.run_until_complete(spider.crawl())
    except KeyboardInterrupt:
        print('Interrupt from keyboard')
        spider.close()
        pending = asyncio.Task.all_tasks()
        for w in pending:
            w.cancel()
            with suppress(asyncio.CancelledError):
                loop.run_until_complete(w)
    finally:
        loop.stop()
        loop.run_forever()
        loop.close()

if __name__ == '__main__':
    main()
But if I press Ctrl+C while it's running, strange things may happen. Sometimes the program shuts down gracefully on Ctrl+C with no error message. However, in some cases the program keeps running after Ctrl+C and doesn't stop until all the work is done; if I press Ctrl+C at that moment, 'Task was destroyed but it is pending!' appears.
I have read some topics about asyncio and added some code in main() to close the coroutines gracefully, but it doesn't work. Has anyone else had similar problems?
I bet the problem happens here:
except:
    pass
You should never do such a thing, and your situation is one more example of what can happen otherwise.
When you cancel a task and await its cancellation, asyncio.CancelledError is raised inside the task and shouldn't be suppressed anywhere inside it. The line where you await your task's cancellation should raise this exception; otherwise the task will continue execution.
That's why you do
task.cancel()
with suppress(asyncio.CancelledError):
    loop.run_until_complete(task)  # this line should raise CancelledError,
                                   # otherwise the task will continue
to actually cancel the task.
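Applied to your fetch (keeping your aiohttp 1.x-style calls), a minimal sketch of the fix is to re-raise CancelledError and swallow only real errors:
async def fetch(self, url):
    try:
        async with aiohttp.ClientSession(loop=self.loop) as session:
            with aiohttp.Timeout(30, loop=self.loop):
                async with session.get(url) as resp:
                    print('get response from url: %s' % url)
    except asyncio.CancelledError:
        raise  # let cancellation propagate so cancel() actually works
    except Exception:
        pass   # swallow only real errors, never a bare except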
Upd:
"But I still hardly understand why the original code could quit well on Ctrl+C with some uncertain probability?"
It depends on the state of your tasks:
- If, at the moment you press Ctrl+C, all tasks are done, none of them will raise CancelledError on awaiting and your code will finish normally.
- If, at the moment you press Ctrl+C, some tasks are pending but close to finishing their execution, your code will get stuck for a bit on task cancellation and finish once those tasks finish shortly afterwards.
- If, at the moment you press Ctrl+C, some tasks are pending and far from finished, your code will get stuck trying to cancel these tasks (which can't be done). Another Ctrl+C will interrupt the process of cancelling, but the tasks won't be cancelled or finished, and you'll get the warning 'Task was destroyed but it is pending!'.
I assume you are using some flavor of Unix; if this is not the case, my comments might not apply to your situation.
Pressing Ctrl-C in a terminal sends all processes associated with that tty the signal SIGINT. A Python process catches this Unix signal and translates it into a KeyboardInterrupt exception. In a threaded application (I'm not sure whether the async stuff internally uses threads, but it very much sounds like it does), typically only one thread (the main thread) receives this signal and reacts to it this way. If it is not prepared for this situation, it will terminate due to the exception.
Then the threading machinery will wait for the still-running fellow threads to terminate before the Unix process as a whole terminates with an exit code. This can take quite a long time. See this question about killing fellow threads and why that isn't possible in general.
What you want to do, I assume, is kill your process immediately, killing all threads in one step.
The easiest way to achieve this is to press Ctrl-\. This sends a SIGQUIT instead of a SIGINT, which typically also affects the fellow threads and causes them to terminate.
If this is not enough (because for whatever reason you need to react properly on Ctrl-C), you can send yourself a signal:
import os, signal
os.kill(os.getpid(), signal.SIGQUIT)
This should terminate all running threads unless they specifically catch SIGQUIT, in which case you can still use SIGKILL to perform a hard kill. That doesn't give them any chance to react, though, and might lead to problems.

Can't catch SIGINT in multithreaded program

I've seen many topics about this particular problem but I still can't figure out why I'm not catching a SIGINT in my main thread.
Here is my code:
def connect(self, retry=100):
    tries = retry
    logging.info('connecting to %s' % self.path)
    while True:
        try:
            self.sp = serial.Serial(self.path, 115200)
            self.pileMessage = pilemessage.Pilemessage()
            self.pileData = pilemessage.Pilemessage()
            self.reception = reception.Reception(self.sp, self.pileMessage, self.pileData)
            self.reception.start()
            self.collisionlistener = collisionListener.CollisionListener(self)
            self.message = messageThread.Message(self.pileMessage, self.collisionlistener)
            self.datastreaminglistener = dataStreamingListener.DataStreamingListener(self)
            self.datastreaming = dataStreaming.Data(self.pileData, self.datastreaminglistener)
            return
        except serial.serialutil.SerialException:
            logging.info('retrying')
            if not retry:
                raise SpheroError('failed to connect after %d tries' % (tries - retry))
            retry -= 1

def disconnect(self):
    self.reception.stop()
    self.message.stop()
    self.datastreaming.stop()
    while not self.pileData.isEmpty():
        self.pileData.pop()
    self.datastreaminglistener.remove()
    while not self.pileMessage.isEmpty():
        self.pileMessage.pop()
    self.collisionlistener.remove()
    self.sp.close()

if __name__ == '__main__':
    import time
    try:
        logging.getLogger().setLevel(logging.DEBUG)
        s = Sphero("/dev/rfcomm0")
        s.connect()
        s.set_motion_timeout(65525)
        s.set_rgb(0, 255, 0)
        s.set_back_led_output(255)
        s.configure_locator(0, 0)
    except KeyboardInterrupt:
        s.disconnect()
In the main function I call connect(), which launches threads over which I don't have direct control.
When I launch this script, I would like to be able to stop it by hitting Ctrl+C, calling the disconnect() function, which stops all the other threads.
In the code I provided this doesn't work because there is no thread in the main function. I have already tried putting all the instructions from main in a thread with a while loop, without success.
Is there a simple way to solve my problem?
Thanks
There's enough in your code to go on.
Your main thread isn't catching SIGINT because it's not alive. Nothing stops your main thread from continuing past the try block, seeing no more code, and closing up shop.
I am not familiar with Sphero; I just attempted to google its docs and was linked to a bunch of 404 pages, so I'll tell you what you would normally do in a threaded environment: join your threads to the main thread so that the main thread can't finish execution before the worker threads.
for t in my_thread_list:
    t.join()  # the main thread can't get past here until all the threads finish
If your Sphero object doesn't provide join-like functionality, you could hack in something that blocks, e.g.:
raw_input('Press Enter to disconnect')
s.disconnect()
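If the Sphero threads can't be joined directly, another option is to keep the main thread alive in a loop that joins with a timeout, so it stays responsive to Ctrl-C (a sketch; my_thread_list stands for whatever threads connect() started):
try:
    while any(t.is_alive() for t in my_thread_list):
        for t in my_thread_list:
            t.join(0.5)  # short timeout keeps the main thread interruptible
except KeyboardInterrupt:
    s.disconnect()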

Need some assistance with Python threading/queue

import threading
import Queue
import urllib2
import time

class ThreadURL(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            host = self.queue.get()
            sock = urllib2.urlopen(host)
            data = sock.read()
            self.queue.task_done()

hosts = ['http://www.google.com', 'http://www.yahoo.com', 'http://www.facebook.com', 'http://stackoverflow.com']
start = time.time()

def main():
    queue = Queue.Queue()
    for i in range(len(hosts)):
        t = ThreadURL(queue)
        t.start()
    for host in hosts:
        queue.put(host)
    queue.join()

if __name__ == '__main__':
    main()
    print 'Elapsed time: {0}'.format(time.time() - start)
I've been trying to get my head around how to perform threading and, after a few tutorials, I've come up with the above.
What it's supposed to do is:
1. Initialize the queue
2. Create my thread pool and then queue up the list of hosts
3. My ThreadURL class should then begin work once a host is in the queue and read the website data
4. The program should finish
What I want to know first off is: am I doing this correctly? Is this the best way to handle threads?
Secondly, my program fails to exit. It prints out the 'Elapsed time' line and then hangs there; I have to kill my terminal for it to go away. I'm assuming this is due to my incorrect use of queue.join()?
Your code looks fine and is quite clean.
The reason your application still "hangs" is that the worker threads are still running, waiting for the main application to put something in the queue, even though your main thread has finished.
The simplest way to fix this is to mark the threads as daemons, by doing t.daemon = True before your call to start. This way, the threads will not block the program stopping.
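For example, in main():
for i in range(len(hosts)):
    t = ThreadURL(queue)
    t.daemon = True  # daemon threads won't keep the process alive after main exits
    t.start()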
Looks fine. Yann is right about the daemon suggestion; that will fix your hang. My only question is why use the queue at all? You're not doing any cross-thread communication, so it seems like you could just pass the host info as an arg to ThreadURL's __init__() and drop the queue.
Nothing wrong with it, just wondering.
One thing: in the thread's run function, inside the while True loop, if an exception happens, task_done() may not be called even though get() has already been called. Thus queue.join() may never return.
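A sketch of a safer run(), using try/finally so that task_done() always runs:
def run(self):
    while True:
        host = self.queue.get()
        try:
            data = urllib2.urlopen(host).read()
        finally:
            self.queue.task_done()  # always called, so queue.join() can't hang on errors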
