Python3: detect keypresses asynchronously and communicate them to the main thread

Disclaimer: the import statements are inside the functions; I know this is uncommon. I am showing my whole program here, function by function, while explaining my issue and my thinking. In reality I am doing something different; I made this minimal example just for this Stack Overflow question. There are duplicate questions, but I did not find good answers in them, since they only say "use multithreading" (e.g. this answer). This particular question concerns itself with how to use multithreading.
The Story: I am running a program in Python. Let's say it is a while loop running to infinity. It just runs happily. For example:
def job(threadname, q):
    from time import sleep
    c = 0
    while True:
        sleep(0.1)  # slow the loop down
        c += 1
        print(c)
What I want is for the program to asynchronously detect a keypress on stdin and then interrupt execution, so that I can do whatever I want within the function it was interrupted in (or, if I am running it with python3 -i program.py, switch over to the REPL with all my modules loaded; remember, this is a minimal example in which I do not want to highlight such concerns too much).
My idea was: I have one function that asynchronously gets the keypress and sends it via a queue to the other thread, and it works. So I extended the job function as such:
def job(threadname, q):
    from time import sleep
    c = 0
    while True:
        sleep(0.1)  # slow the loop down
        c += 1
        print(c)
        ch = q.get()         # extension for multithreading
        handle_keypress(ch)  # extension for handling keypresses
The code for handle_keypress(ch) is:
def handle_keypress(key):
    if key == "q":
        print("Quit thread")
        exit(0)
    elif key == "s":
        print("Would you like to change the step size? This has not been implemented yet.")
    else:
        print("You pressed another key, how nice! Unfortunately, no more options are available yet.")
In other words, not that interesting, other than to showcase what I want to be able to do.
At first the issue seemed to be in the job() function; the culprit is q.get(), which hangs. However, it hangs because my input thread, for some reason, is not asynchronous and blocks. I have no clue how to make it non-blocking.
This is the function of my input thread:
def get_input(threadname, q):
    # Get one character; this code is adapted from
    # https://stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user
    import sys, tty, termios
    while True:
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            ch = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        q.put(ch)
It is obvious to me that sys.stdin.read(1) is blocking, but I don't know how to make it non-blocking. In its current state, I also cannot think of a way to handle the poison-pill situation, which is why q.get() in the job() function blocks.
I run the program by calling the following function:
def run_program():
    from threading import Thread
    from queue import Queue
    queue = Queue()
    thread1 = Thread(target=get_input, args=("Thread-1", queue))
    thread2 = Thread(target=job, args=("Thread-2", queue))
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
My Questions: is this how you would design a program to deal with asynchronous keypresses? If so, how do I make the get_input() function non-blocking?

Thanks to Sav, I found a way to answer this question. In my opinion, his comment is the answer, so if he rewrites his comment as an answer, I'll accept it. For now, I will show which part of the code I changed to get a non-blocking implementation working:
def job(threadname, q):
    from queue import Empty
    from time import sleep
    c = 0
    while True:
        sleep(0.1)  # slow the loop down
        c += 1
        print(c)
        # Below is the changed part
        ch = None
        try:
            ch = q.get(block=False)
        except Empty:
            pass
        if ch is not None:
            handle_keypress(ch)
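The changed part can be factored into a small helper, sketched below; poll_keypress is an illustrative name, not something from the original post:

```python
import queue

def poll_keypress(q):
    """Non-blocking poll: return the next queued keypress, or None if none arrived."""
    try:
        return q.get(block=False)
    except queue.Empty:
        return None
```

The worker loop then stays responsive: it calls poll_keypress(q) once per iteration and only dispatches to handle_keypress when a character actually arrived.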

Related

Console buffers when I clear during loops

Code:
import os, ctypes, threading

cool = 0
started_t = True
threads = 10
lock = threading.Lock()

def output():
    global cool
    cool += 1
    lock.acquire()
    os.system("cls")
    print("Current Counter:", cool)
    lock.release()

while started_t:
    if threading.active_count() <= threads:
        try:
            threading.Thread(target=output).start()
        except:
            pass
The code is an example, but what I am trying to do is: a specific value is changed throughout my code, and, constantly, using threading, the console is cleared (cls) and the value printed. Even using threading.Lock() it still doesn't print properly.
The main problem:
It prints too fast. I heard something about "buffers" and that there are ways to get around them; I googled and tried to find anything, but no luck. Sometimes it prints more than one line at once, etc. The more threads, the worse it gets. :(
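A sketch of one likely cause, not a definitive diagnosis: in the snippet above, cool += 1 happens outside the lock, so two threads can read the same value before either writes it back, and the printed counter can repeat or skip values. Moving the increment under the lock makes each update-and-display step atomic; the cls/print calls are replaced here by a comment so the sketch stays self-contained:

```python
import threading

cool = 0
lock = threading.Lock()

def output():
    global cool
    with lock:
        # Read-modify-write inside the lock, so no two threads can
        # interleave between the increment and the display of the value.
        cool += 1
        # os.system("cls"); print("Current Counter:", cool, flush=True)
        # would go here; flush=True also helps against buffered output.

workers = [threading.Thread(target=output) for _ in range(50)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

With the increment protected, the final count always equals the number of threads started.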

python input() and print() in multithreading [duplicate]

This question already has an answer here:
Python: How to NOT wait for a thread to finish to carry on? [duplicate]
(1 answer)
Closed 7 months ago.
I am using win10 and python 3.7.3 32-bit
I am trying to achieve the following: every second, a data read-out is performed and printed, while a control loop waits for user input to control a device. See my code:
import sys
import time
import threading
from threading import Lock

import device

def print_data(t_start):
    while True:
        data = getData()  # generic example
        print(f'data: {data} at time {time.time() - t_start}')

def control_func():
    while True:
        in_ = input(' Press ENTER to switch ON, Press x to quit')
        if in_ == '':
            device.on()
            print('device ON')
        elif in_ == 'x' or in_ == 'X':
            device.off()
            sys.exit()
        else:
            continue
        in_ = input(' Press ENTER to switch OFF, Press x to quit')
        if in_ == '':
            device.off()
            print('device OFF')
        elif in_ == 'x' or in_ == 'X':
            device.off()
            sys.exit()
        else:
            continue

t_start = time.time()
device = device()
trd1 = threading.Thread(target=control_func())
trd2 = threading.Thread(target=print_data(t_start))
trd1.start()  # starting thread 1
trd2.start()  # starting thread 2
trd1.join()
trd2.join()
This only gives me the input statements from control_func() or the prints from print_data(). The same happens with multiprocessing. I didn't manage to make the two functions run simultaneously.
Replacing print() with
s_print_lock = Lock()

# Define a function to call print with the Lock
def s_print(*a, **b):
    """Thread safe print function"""
    with s_print_lock:
        print(*a, **b)
also didn't do the trick.
Since I am a noob, please help. Or should I take a different approach altogether?
You called the functions in the process of creating the Thread, you didn't pass the functions to the Thread for it to execute, so no actual work occurred in the threads. Change the Thread creation to:
trd1 = threading.Thread(target=control_func) # Remove call parens
trd2 = threading.Thread(target=print_data, args=(t_start,)) # Remove call parens and pass args tuple separately
so the functions themselves are passed, and the Thread actually runs them in the separate logical thread of execution. As written, you ran control_func to completion, then ran print_data to completion, then launched two Threads with target=None (the returned value from both functions) which do nothing, then joined the Threads doing nothing.
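A minimal demonstration of the difference (work and log are illustrative names):

```python
import threading

log = []

def work(tag):
    log.append(tag)

# Wrong: work("a") is *called* here, in the main thread; Thread receives
# target=None (the function's return value) and does nothing when started.
t_wrong = threading.Thread(target=work("a"))
t_wrong.start()
t_wrong.join()

# Right: pass the function object and its arguments separately,
# so work("b") actually runs inside the new thread.
t_right = threading.Thread(target=work, args=("b",))
t_right.start()
t_right.join()
```

Both entries end up in log, but only the second append happened in a worker thread; the first ran in the main thread before t_wrong even existed.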
Additional notes:
1. If you're using multiprocessing, make sure to use multiprocessing.Lock, not threading.Lock (the latter is only guaranteed to work within a single process).
2. While the threaded case likely doesn't require a lock (at least on CPython, where the GIL protects against issues) if you're only printing a single atomic thing at a time, for multiprocessing you should definitely use the lock and add flush=True to all prints; without flush=True, the actual output might be delayed indefinitely.
3. You need to provide some means to communicate to print_data that the loop is done; as written, control_func will sys.exit(), but that only exits that thread, not the program. Unless getData somehow throws an exception as a result of control_func exiting, print_data will never exit, and therefore neither will the main thread.
Solutions for #3 include:
- Using a threading.Event(): make a global should_stop = threading.Event(), change the print_data loop to while not should_stop.is_set():, and after trd1.join() returns, have the main thread call should_stop.set().
- Making trd2 a daemon thread and not bothering to join it (this assumes it can die immediately when trd1 and the main thread have ended, which might not hold for your scenario); it will die forcefully when all non-daemon threads have exited.
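The Event-based option might be sketched like this; print_data_loop and results are stand-in names, and the real loop would do the data read-out and print instead of appending:

```python
import threading
import time

should_stop = threading.Event()
results = []

def print_data_loop():
    # Stand-in for the print_data loop, checking the Event on each pass.
    while not should_stop.is_set():
        results.append("tick")  # here the real code would read and print data
        time.sleep(0.01)

t = threading.Thread(target=print_data_loop)
t.start()
time.sleep(0.05)
should_stop.set()   # what the main thread does after trd1.join() returns
t.join(timeout=1)
```

Once the Event is set, the loop falls through on its next check and the thread exits cleanly, so the final join actually returns.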

Keep program open until key pressed - with threads and subprocess - unwanted key interception

tl;dr: I have several threads, one of which listens to input() to keep the program running and exit on a keypress. But at one point in the program I need to stop this listener, or it will intercept the input meant for a subprocessed program.
Long version:
- Program should download some data, then hand this over to some other console program to be processed.
- Program should either run until download is finished or until ENTER-keypress has been sent.
- In both cases the download thread will be ended gracefully and the external processing should be done.
- Problem: The input() function is still listening and intercepting the first input to the subprocess'ed console program.
import os
import subprocess
import threading
import time

def thread_do_downloads():
    # Does some downloads and will eventually set flag_download_completed = True
    # to signal that the download completed. For this example, just set the flag.
    global flag_download_completed
    flag_download_completed = True

def do_stuff_with_downloaded_data():
    # This is of course not the program I would call, but this example
    # shows how the input would be intercepted.
    if os.name == 'nt':
        # For this example (Windows), call "set"; it will wait for user input
        parameters = ["set", "/p", "variable=Press Enter"]
    else:
        parameters = ["read", "variable"]  # hope this works for Linux...
    p1 = subprocess.Popen(parameters, shell=True)
    p1.communicate()

def listen_for_keypress():
    input()
    print("keypress intercepted")

def main():
    dl = threading.Thread(target=thread_do_downloads)
    dl.start()

    # daemon: so it does not linger after the main thread is done
    kill_listener = threading.Thread(target=listen_for_keypress, daemon=True)
    kill_listener.start()

    print("Press ENTER to stop downloading.")
    while True:
        if not kill_listener.is_alive() or flag_download_completed:
            break
        time.sleep(1)

    # here are some lines to make sure the download thread above completes gracefully
    do_stuff_with_downloaded_data()
    print("All done")

if __name__ == '__main__':
    flag_download_completed = False
    main()
Will result in:
Press ENTER to stop downloading.
Press Enter << stopped here until I pressed ENTER
keypress intercepted << stopped here until I pressed ENTER
All done
If you can keep the main thread on top of the console, maybe you can take advantage of the fact that input() blocks the main thread until Enter is pressed. Once execution continues (because Enter was pressed), communicate to the running threads that they have to stop, using an Event (another example here). If you do want to listen for OS signals, I suggest you take a look at the signal module (watch out: some features may be OS-dependent).
import threading
import time

def thread_do_downloads(stop_activated):
    # Does some downloads; for this example, just loop until asked to stop.
    while not stop_activated.is_set():
        time.sleep(0.5)
        print("ZZZZZZZ")

def do_stuff_with_downloaded_data():
    print("doing stuff with downloaded data")

def main():
    stop_activated = threading.Event()
    dl = threading.Thread(target=thread_do_downloads, args=(stop_activated,))
    dl.start()
    input("Press ENTER to stop downloading.")
    stop_activated.set()
    print("stopping (waiting for threads to finish...)")
    dl.join()
    # here are some lines to make sure the download thread above completes gracefully
    do_stuff_with_downloaded_data()
    print("All done")

if __name__ == '__main__':
    main()
EDIT (as per the OP's comment):
One of the complications of the original question is how to communicate the termination request to a subprocess. Because processes don't share memory with the parent process (the process that spawned them), this can indeed only (or almost only) be done through actual OS signals. Because of this memory isolation, any flags set in the parent process have no effect in the spawned subprocesses: the only ways of inter-process communication are OS signals, or files (or file-like structures) that both the parent and child process know about and use to share information. Also, calling input() in the parent binds the standard input (stdin) to that process, which means that by default the subprocesses are unaware of the keys pressed in the parent (you could always bind the stdin of the child process to the stdin of the parent, but that would complicate the code a bit more).
Fortunately, instances of Popen offer a nice way to send signals to the child process: the TERM signal, which the subprocess can catch and is supposed to interpret as "Hey, you're going to be stopped real soon, so do your clean-up things, close files and so on, and exit", and the KILL signal, which doesn't really tell the subprocess anything (it can't be caught): it just kills it. (In Linux, for instance, a KILL signal removes all access to memory from the killed process, so any action that uses memory, such as a seek for the next operation, will cause an error. More info here.)
To demonstrate that, let's say we have a simple script.py file in the same directory where our main program is located that looks like this:
script.py >
#!/usr/bin/env python
import sys
import random
import time

def main():
    done = False
    while not done:
        time.sleep(0.5)
        print("I'm busy doing things!!")
        done = random.randint(0, 15) == 1

if __name__ == "__main__":
    main()
    sys.exit(0)  # This is pretty much unnecessary, though
A script that takes a random amount of time to finish and that can potentially run quite long (at least long enough to demonstrate). Now we can create one (or many) subprocesses in a thread that run that script.py file, regularly check their status (using poll()), and, if the user has requested a stop, send a TERM signal, followed a bit later by a KILL if necessary.
import threading
import time
import subprocess

def thread_do_downloads(stop_activated):
    p = subprocess.Popen('./script.py', stdout=subprocess.PIPE)
    while p.poll() is None:
        time.sleep(0.5)
        print("Subprocess still running... Sleeping a bit... ZzzzzzZZZ")
        if stop_activated.is_set():
            print("Forcing output requested!!!")
            print("Trying to terminate the process nicely, with a SIGTERM:")
            p.terminate()
            time.sleep(0.5)
            if p.poll() is None:
                print("Not being nice anymore... Die, die, die!!")
                p.kill()
            print("This is what the subprocess 'said':\n%s" % p.stdout.read())
            return
    print("stopping normally")

def do_stuff_with_downloaded_data():
    print("doing stuff with downloaded data")

def listen_for_keypress(stop_activated):
    input("Press ENTER to stop downloading.")
    print("keypress intercepted")
    stop_activated.set()

def main():
    stop_activated = threading.Event()
    dl = threading.Thread(target=thread_do_downloads, args=(stop_activated,))
    dl.start()
    kill_listener = threading.Thread(target=listen_for_keypress, args=(stop_activated,), daemon=True)
    kill_listener.start()
    dl.join()
    print("Finished downloading data")
    # here are some lines to make sure the download thread above completes gracefully
    do_stuff_with_downloaded_data()
    print("All done")

if __name__ == '__main__':
    main()
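The poll()/terminate() pattern from the answer can be reduced to a few lines; here, a python -c one-liner that sleeps stands in for script.py:

```python
import subprocess
import sys
import time

# Spawn a child that would run for a long time, mirroring script.py.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(0.2)
still_running = p.poll() is None  # poll() returns None while the child lives

p.terminate()                     # sends SIGTERM (TerminateProcess on Windows)
p.wait(timeout=5)                 # reap the child; after this, poll() is its exit code
```

After terminate() and wait(), poll() returns the child's return code (negative of the signal number on POSIX) instead of None.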

How to stop an infinite loop safely in Python?

I've got a script that runs an infinite loop, adds things to a database, and does things that I can't just stop halfway through, so I can't simply press Ctrl+C to stop it.
I want to be able to somehow stop a while loop, but let it finish its last iteration before it stops.
Let me clarify:
My code looks something like this:
while True:
    do something
    do more things
    do more things
I want to be able to interrupt the while loop at the end, or the beginning, but not between doing things because that would be bad.
And I don't want it to ask me after every iteration if I want to continue.
Thanks for the great answers, I'm super grateful, but my implementation doesn't seem to be working:
def signal_handler(signal, frame):
    global interrupted
    interrupted = True

class Crawler():
    def __init__(self):
        # not relevant
        pass

    def crawl(self):
        interrupted = False
        signal.signal(signal.SIGINT, signal_handler)
        while True:
            doing things
            more things
            if interrupted:
                print("Exiting..")
                break
When I press Ctrl+C, the program just keeps going, ignoring me.
What you need to do is catch the interrupt, set a flag saying you were interrupted but then continue working until it's time to check the flag (at the end of each loop). Because python's try-except construct will abandon the current run of the loop, you need to set up a proper signal handler; it'll handle the interrupt but then let python continue where it left off. Here's how:
import signal
import time  # For the demo only

def signal_handler(signal, frame):
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, signal_handler)

interrupted = False
while True:
    print("Working hard...")
    time.sleep(3)
    print("All done!")
    if interrupted:
        print("Gotta go")
        break
Notes:
- Use this from the command line. In the IDLE console, it will trample on IDLE's own interrupt handling.
- A better solution would be to "block" KeyboardInterrupt for the duration of the loop and unblock it when it's time to poll for interrupts. This is a feature of some Unix flavors but not all, hence Python does not support it (see the third "General rule").
- The OP wants to do this inside a class. But the interrupt function is invoked by the signal handling system with two arguments, the signal number and a pointer to the stack frame, leaving no place for a self argument giving access to the class object. Hence the simplest way to set a flag is to use a global variable. You can rig a pointer to the local context by using closures (i.e., define the signal handler dynamically in __init__()), but frankly I wouldn't bother unless a global is out of the question due to multithreading or whatever.
- Caveat: if your process is in the middle of a system call, handling a signal may interrupt the system call. So this may not be safe for all applications. Safer alternatives would be: (a) instead of relying on signals, use a non-blocking read at the end of each loop iteration (and type input instead of hitting ^C); (b) use threads or interprocess communication to isolate the worker from the signal handling; or (c) do the work of implementing real signal blocking, if you are on an OS that has it. All of these are OS-dependent to some extent, so I'll leave it at that.
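Alternative (a), the non-blocking read, can be sketched with the select module (on Unix; on Windows, select only works on sockets). has_pending is an illustrative helper name; it is demonstrated here on a pipe, though the same check works on sys.stdin.fileno() on Unix:

```python
import os
import select

def has_pending(fd, timeout=0.0):
    """Return True if fd has data ready to read, without blocking."""
    ready, _, _ = select.select([fd], [], [], timeout)
    return bool(ready)

# Demonstrate on a pipe: nothing pending until something is written.
r, w = os.pipe()
```

The loop would call has_pending once per iteration and only read when it returns True, so the worker never stalls waiting for input.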
The logic below will help you do this:
import signal
import sys
import time

run = True

def signal_handler(signal, frame):
    global run
    print("exiting")
    run = False

signal.signal(signal.SIGINT, signal_handler)

while run:
    print("hi")
    time.sleep(1)
    # do anything
print("bye")
While running this, try pressing Ctrl+C.
To clarify #praba230890's solution: the interrupted variable was not defined in the correct scope. It was defined in the crawl function, so the handler could not reach it as a global variable, given the definition of the handler at the root of the program.
Here is an edited example of the principle above. It is an infinite Python loop in a separate thread with safe signal handling. It also has a thread-blocking sleep step; it is up to you to keep it, replace it with an asyncio implementation, or remove it.
This function can be imported anywhere in an application and runs without blocking other code (e.g. good for a Redis pub/sub subscription). After the SIGINT is caught, the thread's job ends peacefully.
from typing import Callable
import time
import threading
import signal

end_job = False

def run_in_loop(job: Callable, interval_sec: float = 0.5):
    def interrupt_signal_handler(signal, frame):
        global end_job
        end_job = True

    signal.signal(signal.SIGINT, interrupt_signal_handler)

    def do_job():
        while True:
            job()
            time.sleep(interval_sec)
            if end_job:
                print("Parallel job ending...")
                break

    th = threading.Thread(target=do_job)
    th.start()
You forgot to add a global statement in the crawl function.
So the result will be:
import signal

def signal_handler(signal, frame):
    global interrupted
    interrupted = True

class Crawler():
    def __init__(self):
        ...  # or `pass` if you don't want this to do anything; `...` is for unfinished code

    def crawl(self):
        global interrupted
        interrupted = False
        signal.signal(signal.SIGINT, signal_handler)
        while True:
            # doing things
            # more things
            if interrupted:
                print("Exiting..")
                break
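A minimal illustration of why the global statement matters (the names here are illustrative): assignment inside a function otherwise creates a new local variable and leaves the module-level one untouched.

```python
interrupted = False

def set_without_global():
    interrupted = True  # creates a *local* variable; module-level flag unchanged

def set_with_global():
    global interrupted
    interrupted = True  # rebinds the module-level flag

set_without_global()
after_local = interrupted   # still the original value

set_with_global()
after_global = interrupted  # now updated
```

This is exactly why the handler's assignment was invisible to crawl: without global in crawl, its interrupted was a separate local name.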
I hope the code below helps:
#!/bin/python
import sys
import time
import signal

def cb_sigint_handler(signum, stack):
    global is_interrupted
    print("SIGINT received")
    is_interrupted = True

if __name__ == "__main__":
    is_interrupted = False
    signal.signal(signal.SIGINT, cb_sigint_handler)
    while True:
        # do stuff here
        print("processing...")
        time.sleep(3)
        if is_interrupted:
            print("Exiting..")
            # do clean up
            sys.exit(0)

Why print operation within signal handler may change deadlock situation?

I have a simple program, shown below:
import threading
import time
import signal

WITH_DEADLOCK = 0
lock = threading.Lock()

def interruptHandler(signo, frame):
    print str(frame), 'received', signo
    lock.acquire()
    try:
        time.sleep(3)
    finally:
        if WITH_DEADLOCK:
            print str(frame), 'release'
            lock.release()

signal.signal(signal.SIGINT, interruptHandler)

for x in xrange(60):
    print time.strftime("%H:%M:%S"), 'main thread is working'
    time.sleep(1)
So, if you start the program, there is no deadlock, even if Ctrl+C is pressed twice within 3 seconds. Each time you press Ctrl+C, the proper line is displayed.
If you change WITH_DEADLOCK = 1 and press Ctrl+C twice (within 3 seconds), the program will hang.
Can anybody explain why the print operation makes such a difference?
(My python version is 2.6.5)
To be honest, I think J.F. Sebastian's comment is the most appropriate answer here: you need to make your signal handler reentrant, which it currently isn't, and it is mostly just surprising that it works at all without the print statement.
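A sketch of what "reentrant" means in practice here: a handler that only sets a flag acquires no locks, sleeps, or does I/O, so a second delivery of the signal mid-handler has nothing to deadlock on. This uses threading.Event purely as a convenient flag, and calls the handler directly to simulate two rapid Ctrl+C presses:

```python
import signal
import threading

interrupted = threading.Event()  # a simple flag; safe to set from a handler

def interrupt_handler(signum, frame):
    # Only set a flag: no lock.acquire(), no time.sleep(), no print,
    # so re-entering this handler while it runs is harmless.
    interrupted.set()

# Simulate the handler firing twice in quick succession (a double Ctrl+C).
interrupt_handler(signal.SIGINT, None)
interrupt_handler(signal.SIGINT, None)
```

The main loop then polls interrupted.is_set() at a safe point and does the actual lock-protected work there, outside the handler.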
