Python: cancel raw_input/input by writing to stdin?

For starters, I'm on Python 2.7.5 and Windows x64; my app is targeted at those parameters.
I need a way to cancel a raw_input after a certain amount of time has passed. Currently I have my main thread starting two child threads: one is the timer (threading.Timer) and the other fires the raw_input. Both put a value into a Queue.Queue that the main thread monitors. It then acts on what is sent to the queue.
# snip...
q = Queue.Queue()
# spawn user thread
user = threading.Thread(target=user_input, args=[q])
user.start()
# spawn timer thread (20 minutes)
timer = threading.Timer(1200, q.put, ['y'])
timer.start()
# wait until we get a response from either
while q.empty():
    time.sleep(1)
timer.cancel()
# stop the user input thread here if it's still going
# process the queue value
i = q.get()
if i in 'yY':
    pass  # do yes stuff here
elif i in 'nN':
    pass  # do no stuff here
# ...snip

def user_input(q):
    i = raw_input(
        "Unable to connect in last {} tries, "
        "do you wish to continue trying to "
        "reconnect? (y/n)".format(connect_retries))
    q.put(i)
The research that I've done so far seems to say that it's not possible to "correctly" cancel a thread. I feel that processes are too heavy for the task, though I'm not opposed to using them if that's what really needs to be done. Instead, my thought is that if the timer finishes with no user input, I can write a value to stdin and close that thread gracefully.
So, how do I write to stdin from the main thread so that the child thread accepts the input and closes gracefully?
Thanks!

You can use the threading.Thread.join method to handle the timeout. The key to getting it working is to set the daemon attribute as shown below.
import threading

response = None

def get_response():
    global response
    response = input("Do you wish to reconnect? ")

thread = threading.Thread(target=get_response, daemon=True)
thread.start()
thread.join(2)

if response is None:
    print()
    print('Exiting')
else:
    print('As you wish')
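Since the question targets Python 2.7, where Thread() has no daemon keyword argument and input() would evaluate the typed text, here is a minimal sketch of the same approach adapted to that version, using raw_input and the 20-minute timeout from the question:

import threading

response = None

def get_response():
    global response
    response = raw_input("Do you wish to reconnect? ")

thread = threading.Thread(target=get_response)
thread.daemon = True  # on Python 2 this must be set before start()
thread.start()
thread.join(1200)     # wait up to 20 minutes for an answer

if response is None:
    print 'Exiting'
else:
    print 'As you wish'

If join() times out, the daemon thread is still blocked in raw_input, but because it is a daemon it will not keep the process alive once the main thread exits.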


Understanding implementation of parallel programming via threading

Scenario:
1. A sensor continuously sends data at an interval of 100 milliseconds (the interval needs to be configurable).
2. Thread 1 reads the data continuously from the sensor and writes it to a common queue.
3. This process continues until a keyboard interrupt happens.
4. Thread 2 locks the queue (it may momentarily block Thread 1).
5. Thread 2 reads the full contents of the queue into a temporary structure.
6. Thread 2 releases the queue.
7. Thread 2 processes the data; this is a computational task. While it runs, Thread 1 should keep filling the queue with sensor data.
I have read about threading and the GIL. Step 7 cannot afford any loss of data sent by the sensor while Thread 2 performs its computational work.
How can this be implemented in Python?
What I started with is:
from threading import Thread
from queue import Queue

q = Queue(maxsize=10)

def fun1():
    fun2Thread = Thread(target=fun2)
    fun2Thread.start()
    while True:
        try:
            q.put(1)
        except KeyboardInterrupt:
            print("Key Interrupt")
    fun2Thread.join()

def fun2():
    print(q.get())

def read():
    fun1Thread = Thread(target=fun1)
    fun1Thread.start()
    fun1Thread.join()

read()
The issue I'm facing is that the terminal gets stuck after printing 1. Can someone please guide me on how to implement this scenario?
Here's an example that may help.
We have a main program (driver), a client and a server. The main program manages queue construction and the starting and ending of the subprocesses.
The client sends a range of values via a queue to the server. When the range is exhausted it tells the server to terminate. There's a delay (sleep) in enqueueing the data for demonstration purposes.
Try running it once without any interrupt and note how everything terminates nicely. Then run again and interrupt (Ctrl-C) and again note a clean termination.
from multiprocessing import Queue, Process
from signal import signal, SIGINT, SIG_IGN
from time import sleep

def client(q, default):
    # Restore the default SIGINT handler so Ctrl-C interrupts the client.
    signal(SIGINT, default)
    try:
        for i in range(10):
            sleep(0.5)
            q.put(i)
    except KeyboardInterrupt:
        pass
    finally:
        # Sentinel value telling the server to terminate.
        q.put(-1)

def server(q):
    while (v := q.get()) != -1:
        print(v)

def main():
    q = Queue()
    # Ignore SIGINT in the main process (the children inherit this handler),
    # saving the default handler so it can be passed to the client.
    default = signal(SIGINT, SIG_IGN)
    (server_p := Process(target=server, args=(q,))).start()
    (client_p := Process(target=client, args=(q, default))).start()
    client_p.join()
    server_p.join()

if __name__ == '__main__':
    main()
EDIT:
Edited to ensure that the server process continues to drain the queue if the client is terminated due to a KeyboardInterrupt (SIGINT).

Processor-Efficient Way To Listen For Variable Change In Thread

I'm running threads that output to the console.
To know when to output to the console, I'm using a listener for a variable change.
But that listener burns a lot of processing power in its loop. Is there a more efficient way to listen for the change?
Here's the code:
def output_console_item(message, sound=None, over_ride_lock=False):
    log = StandardLogger(logger_name='output_console_item')

    # lock to serialize console output
    lock = threading.Lock()

    def print_msg(message):
        # Wait until console lock is released.
        if over_ride_lock is False:
            while True:
                if CONSOLE_CH.CONSOLE_LOCK is False:
                    break
                # time.sleep(0.10)

        # Make sure the whole print completes or threads can mix up output in one line.
        with lock:
            print(message)
        return

    thread = Thread(target=print_msg, args=(message,))
    thread.start()
    thread.join()

    if sound:
        thread = Thread(target=play_sound, args=(sound,))
        thread.start()
        thread.join()
You don't need another thread to print things for you. You can create a safe print, something like:
lock = threading.Lock()

def safe_print(message):
    with lock:
        print(message)
and use it in all your threads.
Or, even better, use the Python logging module, which is already thread-safe.
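A minimal sketch of the logging-based alternative (the format string is just an illustration): logging handlers take an internal lock around each record they emit, so no explicit locking is needed in your code.

import logging

logging.basicConfig(level=logging.INFO, format="%(threadName)s: %(message)s")
logger = logging.getLogger(__name__)

def print_msg(message):
    # each handler serializes emits behind its own lock,
    # so this is safe to call from any thread
    logger.info(message)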
Edit:
You can change CONSOLE_LOCK to be a real lock and use it something like this:
def print_msg(message):
    # Wait until the console lock is released.
    if over_ride_lock is False:
        with CONSOLE_LOCK:
            pass
    with lock:
        print(message)
Then, instead of doing CONSOLE_LOCK = True, do CONSOLE_LOCK.acquire(), and instead of doing CONSOLE_LOCK = False, do CONSOLE_LOCK.release().
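A minimal sketch of that replacement, assuming CONSOLE_CH is the object holding the flag in the question (the ConsoleChannel class here is a hypothetical stand-in):

import threading

class ConsoleChannel(object):  # hypothetical stand-in for whatever defines CONSOLE_CH
    CONSOLE_LOCK = threading.Lock()

CONSOLE_CH = ConsoleChannel()

# where the code previously set CONSOLE_CH.CONSOLE_LOCK = True:
CONSOLE_CH.CONSOLE_LOCK.acquire()
# ... console output is held back while the lock is held ...
# where it previously set CONSOLE_CH.CONSOLE_LOCK = False:
CONSOLE_CH.CONSOLE_LOCK.release()

With a real lock, the waiting threads block inside "with CONSOLE_LOCK: pass" instead of spinning, so no CPU is burned while they wait.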

How to make sure queue is empty before exiting main thread

I have a program that has two threads, the main thread and one additional that works on handling jobs from a FIFO queue.
Something like this:
import queue
import threading

q = queue.Queue()

def _worker():
    while True:
        msg = q.get(block=True)
        print(msg)
        q.task_done()

t = threading.Thread(target=_worker)
#t.daemon = True
t.start()

q.put('asdf-1')
q.put('asdf-2')
q.put('asdf-4')
q.put('asdf-4')
What I want to accomplish is basically to make sure the queue is emptied before the main thread exits.
If I set t.daemon to be True the program will exit before the queue is emptied, however if it's set to False the program will never exit. Is there some way to make sure the thread running the _worker() method clears the queue on main thread exit?
The comments touch on using .join(), but depending on your use case, using a join may make threading pointless.
I assume that your main thread will be doing things other than adding items to the queue, and may be shut down at any point; you just want to ensure that your queue is empty before shutdown completes.
At the end of your main thread, you could add a simple empty check in a loop.
while not q.empty():
    sleep(1)
If you don't set t.daemon = True then the thread will never finish. Setting the thread as a daemon thread will mean that the thread does not cause your program to stay running when the main thread finishes.
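Putting the two pieces together, a minimal sketch of the whole pattern using the names from the question: the worker is a daemon thread, so it cannot keep the process alive on its own, and the main thread waits until the queue is empty before it exits.

import queue
import threading
import time

q = queue.Queue()

def _worker():
    while True:
        msg = q.get(block=True)
        print(msg)
        q.task_done()

t = threading.Thread(target=_worker)
t.daemon = True  # the worker will not keep the process alive by itself
t.start()

for item in ('asdf-1', 'asdf-2', 'asdf-4', 'asdf-4'):
    q.put(item)

# Wait for the worker to drain the queue before the main thread exits.
while not q.empty():
    time.sleep(1)

Note that q.empty() becomes True as soon as the last item has been taken off the queue, not when it has finished processing; since the worker calls q.task_done(), q.join() would be the stricter wait.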
Put a special item (e.g. None) in the queue that signals the worker thread to stop.
import queue
import threading

q = queue.Queue()

def _worker():
    while True:
        msg = q.get(block=True)
        if msg is None:
            return
        print(msg)  # do your stuff here

t = threading.Thread(target=_worker)
#t.daemon = True
t.start()

q.put('asdf-1')
q.put('asdf-2')
q.put('asdf-4')
q.put('asdf-4')
q.put(None)
t.join()

How to wait for a spawned thread to finish in Python

I want to use threads to do some blocking work. What should I do to:
Spawn a thread safely
Do useful work
Wait until the thread finishes
Continue with the function
Here is my code:
import threading

def my_thread():
    # Wait for the server to respond..
    pass

def main():
    a = threading.Thread(target=my_thread)
    a.start()
    # Do other stuff here
You can use Thread.join. A few lines from the docs:
Wait until the thread terminates. This blocks the calling thread until the thread whose join() method is called terminates – either normally or through an unhandled exception – or until the optional timeout occurs.
For your example it will be like:
def main():
    a = threading.Thread(target=my_thread)
    a.start()
    a.join()
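Since the quoted docs mention the optional timeout, here is a small sketch of that variant (the 5-second value is just an example): join() returns after the timeout whether or not the thread finished, so you check is_alive() afterwards.

def main():
    a = threading.Thread(target=my_thread)
    a.start()
    a.join(timeout=5.0)  # block for at most 5 seconds
    if a.is_alive():
        # the thread is still blocked; decide how to handle that
        print("thread did not finish in time")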

How to create a global error handler in a multi-threaded Python application

I am developing a multi-threaded application in Python. I have the following scenario:
There are 2-3 producer threads which communicate with the DB, fetch data in large chunks, and fill up a queue.
There is an intermediate worker which breaks the large chunks fetched by producer threads into smaller ones and fills another queue with them.
There are 5 consumer threads which consume the queue created by the intermediate worker thread.
Data source objects are accessed by producer threads through their API. These data sources are completely separate, so a producer understands only the presence or absence of data given out by a data source object.
I create threads of these three types and make the main thread wait for their completion by calling join() on them.
Now, for such a setup I want a common error handler which senses the failure of any thread or any exception and decides what to do. For example, if I press Ctrl+C after starting my application, the main thread dies but the producer and consumer threads continue to run. I would like the entire application to shut down once Ctrl+C is pressed. Similarly, if a DB error occurs in the data source module, the producer thread should be notified of it.
This is what I have done so far:
I have created a class ThreadManager; its object is passed to all threads. I have written an error handler method and set it as sys.excepthook. This handler should catch exceptions and errors and then call methods of the ThreadManager class to control the running threads. Here is a snippet:
class Producer(threading.Thread):
    ....
    def produce():
        data = dataSource.getData()

class DataSource:
    ....
    def getData():
        raise Exception("critical")

def customHandler(exceptionType, value, stackTrace):
    print "In custom handler"

sys.excepthook = customHandler
Now, when a thread of the Producer class calls getData() of the DataSource class, an exception is thrown, but it is never caught by my customHandler method.
What am I missing? Also, in such a scenario, what other strategies can I apply? Please help. Thank you for having enough patience to read all this :)
What you need is a decorator. In essence you are modifying your original function and putting it inside a try-except:
import os

def exception_decorator(func):
    def _function(*args):
        try:
            result = func(*args)
        except:
            print('*** ESC default handler ***')
            os._exit(1)
        return result
    return _function
If your thread function is called myfunc, then you add the following line above your function definition:
@exception_decorator
def myfunc():
    pass
Can't you just catch "KeyboardInterrupt" when pressing Ctrl+C and do:
for thread in threading.enumerate():
    thread._Thread__stop()
    thread._Thread__delete()

while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
And with a self.alive flag in each threaded class, you could theoretically set thread.alive = False and have it stop gracefully:
for thread in threading.enumerate():
    thread.alive = False
    time.sleep(5)  # Grace period
    thread._Thread__stop()
    thread._Thread__delete()

while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
example:
import os
from threading import *  # note: this shadows the builtin enumerate with threading.enumerate
from time import sleep

class worker(Thread):
    def __init__(self):
        self.alive = True
        Thread.__init__(self)
        self.start()

    def run(self):
        while self.alive:
            sleep(0.1)

runner = worker()

try:
    raw_input('Press ctrl+c!')
except:
    pass

for thread in enumerate():
    thread.alive = False
    sleep(1)
    try:
        thread._Thread__stop()
        thread._Thread__delete()
    except:
        pass

# There will always be 1 thread alive and that's the __main__ thread.
while len(enumerate()) > 1:
    sleep(1)
os._exit(0)
Try going about it by changing the internal system exception handler?
import sys

origExcepthook = sys.excepthook

def uberexcept(exctype, value, traceback):
    if exctype == KeyboardInterrupt:
        print "Gracefully shutting down all the threads"
        # enumerate() thingie here.
    else:
        origExcepthook(exctype, value, traceback)

sys.excepthook = uberexcept
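One more note on why the customHandler in the question never fires: sys.excepthook is only invoked for uncaught exceptions in the main thread; exceptions raised in worker threads are reported through an internal hook instead. On Python 3.8 and later there is a dedicated threading.excepthook for exactly this case; a minimal sketch, assuming a 3.8+ runtime:

import threading

def thread_handler(args):
    # args carries .exc_type, .exc_value, .exc_traceback and .thread
    print("thread %s died: %r" % (args.thread.name, args.exc_value))
    # e.g. tell a ThreadManager here to shut everything down

threading.excepthook = thread_handler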
