I'm reading from a named pipe in a blocking manner.
I want my python script to react to SIGTERM signals.
This is what I've got so far:
#!/usr/bin/python3
import signal

def handler_stop_signals(signum, frame):
    global fifo
    fifo.close()
    exit

signal.signal(signal.SIGTERM, handler_stop_signals)

fifo = open("someNamedPipe", "r")
while True:
    for line in fifo:
        doSomething
    fifo.close()
    exit
When the script receives a SIGTERM signal, it closes the pipe as expected but raises a RuntimeError.
RuntimeError: reentrant call inside <_io.BufferedReader name='someNamedPipe'>
Is there another way to get out of the for loop and close the fifo gently?
TL;DR You need a non-blocking read in order to be able to control termination; asyncio using aiofiles is probably the most elegant solution, but they all have their quirks.
Sample Producer
I'm going to start with how one would go about writing a well-behaved producer of data to a named pipe, because it's an easier vehicle to introduce some APIs.
import os
import signal
from datetime import datetime
from threading import Event

class Producer:
    def __init__(self, path):
        self.path = path
        self.event = Event()

    def start(self):
        os.mkfifo(self.path)
        try:
            print('Waiting for a listener...')
            with open(self.path, 'w') as fifo:
                fifo.write('Starting the convoluted clock...\n')
                fifo.flush()
                while not self.event.wait(timeout=1):
                    print('Writing a line...')
                    fifo.write(str(datetime.now()) + '\n')
                    fifo.flush()
                fifo.write('The convoluted clock has finished.\n')
                fifo.flush()
            print('Finished.')
        finally:
            os.unlink(self.path)

    def stop(self, *args, **kwargs):
        self.event.set()

producer = Producer('/tmp/someNamedPipe')
signal.signal(signal.SIGINT, producer.stop)
signal.signal(signal.SIGTERM, producer.stop)
producer.start()
This writes the current date out to the named pipe as a string once a second. SIGINT and SIGTERM will both shut the pipe down gracefully, writing The convoluted clock has finished. as the last line to the pipe before closing down. It uses a threading.Event to communicate between the stop method (which is invoked from the signal handler) and start (which waits for at most one second before advancing to the next iteration of the loop). self.event.wait(timeout=1) returns True immediately if the event is set, or False after waiting at most one second without the event being set.
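As a tiny standalone illustration of that Event.wait(timeout=...) behaviour (not part of the producer itself):

import time
from threading import Event

e = Event()
start = time.monotonic()
print(e.wait(timeout=1))   # False: waits roughly one second, event never set
e.set()
print(e.wait(timeout=1))   # True: returns immediately because the event is set
print('elapsed:', round(time.monotonic() - start, 1), 'seconds')   # about 1.0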
Sample (Buggy) Consumer
It would be tempting to use a similar technique to write the consumer:
import signal, os
from threading import Event

class BuggyConsumer:
    def __init__(self, path):
        self.path = path
        self.event = Event()

    def start(self):
        with open(self.path, 'r') as fifo:
            # we'll be a bit more aggressive on checking for termination
            # because we could have new data for us at any moment!
            while not self.event.wait(0.1):
                print('Got from the producer:', fifo.readline())
            print('The consumer was nicely stopped.')
            # technically the pipe gets closed AFTER this print statement
            # because we're using a with block

    def stop(self, *args, **kwargs):
        self.event.set()

consumer = BuggyConsumer('/tmp/someNamedPipe')
signal.signal(signal.SIGINT, consumer.stop)
signal.signal(signal.SIGTERM, consumer.stop)
consumer.start()
Unfortunately this won't work great in practice because open() opens files in blocking mode. This means read() calls block the calling thread, which essentially prevents "nice" aborts unless you check in between read calls. Concretely, if the producer stopped producing but kept the pipe open, the consumer would sit forever at fifo.readline() and would never get around to checking the event for "nice" termination.
Sample (Less Buggy) Consumer
This example avoids the problem of a misbehaving producer trapping the consumer in a blocking read call, but it's considerably more complicated and forces you to use lower-level APIs that are not nearly as friendly:
import signal, os
from threading import Event

class ComplicatedConsumer:
    def __init__(self, path):
        self.path = path
        self.event = Event()

    def start(self):
        # Open a file descriptor in a non-blocking way.
        fifo = os.open(self.path, os.O_RDONLY | os.O_NONBLOCK)
        try:
            while not self.event.wait(0.1):
                try:
                    # This is FAR from a comprehensive implementation.
                    # We're making some pretty yucky assumptions.
                    line = os.read(fifo, 1000).decode('utf8')
                    if line:
                        print('Got from the producer:', line)
                    else:
                        print('EOF from the producer.')
                        break
                except BlockingIOError:
                    # the call to os.read would have blocked (meaning we're
                    # caught up)
                    pass
            print('The consumer was nicely stopped.')
        finally:
            os.close(fifo)

    def stop(self, *args, **kwargs):
        self.event.set()
A proper implementation would be FAR more complicated, because this code naively assumes that:
each read() call from the pipe is a single, complete "message"; this is the worst assumption. You could speed up the producer and see that this less buggy consumer starts reading multiple "lines" as a single line.
a line never spans more than 1000 bytes; a more advanced implementation would need to buffer "partial" messages, look for newlines, and split accordingly
In all but the most simplistic and slow-moving use cases (like, say, a once-a-second ticking clock), this implementation would need a TON of work in order to be practically useful.
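To sketch the buffering idea from the list above: the read loop inside start() could accumulate raw bytes and only print complete, newline-terminated lines, keeping any trailing partial line for the next iteration. This is only a rough sketch reusing the names from ComplicatedConsumer (self.event, fifo), not a drop-in replacement:

buffer = b''
while not self.event.wait(0.1):
    try:
        chunk = os.read(fifo, 1000)
        if not chunk:
            print('EOF from the producer.')
            break
        buffer += chunk
        # emit only complete lines; keep the trailing partial line buffered
        while b'\n' in buffer:
            line, buffer = buffer.split(b'\n', 1)
            print('Got from the producer:', line.decode('utf8'))
    except BlockingIOError:
        # nothing to read right now
        pass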
Sample Consumer (asyncio)
The challenge in writing this properly is that there are multiple unpredictable sources of events (signals, incoming data from a pipe). asyncio allows you to express your code as coroutines, and they can be suspended and resumed when Python feels like it, but with you specifying the rules.
import asyncio
import signal
import aiofiles

class AsyncConsumer:
    def __init__(self, path):
        loop = asyncio.get_event_loop()
        self.path = path
        self.fifo_closed = loop.create_future()
        self.fifo = None

    async def start(self):
        self.fifo = await aiofiles.open(self.path, 'r')
        # wrap the coroutine in a task so asyncio.wait accepts it alongside the future
        read_task = asyncio.ensure_future(self._read_lines())
        done, pending = await asyncio.wait(
            [read_task, self.fifo_closed],
            return_when=asyncio.FIRST_COMPLETED)
        print('The consumer is going to be nicely stopped...')
        await self.fifo.close()
        print('The consumer was nicely stopped.')

    async def _read_lines(self):
        try:
            async for line in self.fifo:
                print('Got from the producer:', line)
            print('EOF from the producer.')
        except ValueError:
            # aiofiles raises a `ValueError` when the underlying file is closed
            # from underneath it
            pass

    def stop(self, *args, **kwargs):
        if self.fifo is not None:
            print('we got the message')
            self.fifo_closed.set_result(None)

loop = asyncio.get_event_loop()
consumer = AsyncConsumer('/tmp/someNamedPipe')
loop.add_signal_handler(signal.SIGINT, consumer.stop)
loop.add_signal_handler(signal.SIGTERM, consumer.stop)
loop.run_until_complete(consumer.start())
The async start() method kicks off two streams of work: one which reads lines one-by-one as they come in, and another which essentially hangs until a signal is received. It proceeds when EITHER of these two things finishes.
Unfortunately I noticed that ultimately aiofiles relies on a blocking implementation under the hood, because the await self.fifo.close() method still hangs if a read() is in progress. But at least there is a spot to place your code.
Wrapping it up
Ultimately there isn't a super great out-of-the-box solution here, but hopefully one of these variations helps you solve your problem.
Related
import signal
import asyncio
from typing import Optional
from types import FrameType

async def main() -> None:
    server = (
        await asyncio.start_server(
            lambda reader, writer: None,
            '127.0.0.1',
            7070
        )
    )
    tasks = (
        asyncio.create_task(server.serve_forever()),
        asyncio.create_task(event.wait())
    )
    async with server:
        await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        # Some shutdown logic goes here.

def handle_signal(signal: int, frame: Optional[FrameType]) -> None:
    event.set()

if __name__ == '__main__':
    event = asyncio.Event()
    signal.signal(signal.SIGINT, handle_signal)
    asyncio.run(main())
I want to gracefully terminate an asyncio server if the user sends SIGINT (or, in other words, presses CTRL+C).
For this reason, asyncio.Event is used: I use asyncio.wait in the main coroutine to wait until either the server has been stopped or SIGINT has been received. The signal handler has been set accordingly.
The problem is, the solution does not work (tested on Alpine Linux). Can somebody explain why exactly? Can I work around it somehow?
An interrupt can happen at any time and the handler is called between two Python bytecode instructions. In general, there are only a few simple functions that are safe to call in a signal handler, because buffers or internal data may be in an inconsistent state. The recommendation is only to set a flag that is periodically checked in the program's main loop.
In asyncio, we can handle the interrupt like something happening in another thread. Technically it is in the same thread, but the point is it is not controlled by the event loop.
Asyncio is not thread-safe, but there are a few helpers. call_soon_threadsafe schedules a callback to be called asap (like call_soon), but in addition it wakes up the event loop.
def handle_signal(signal: int, frame: Optional[FrameType]) -> None:
    asyncio.get_running_loop().call_soon_threadsafe(evset)

def evset():
    event.set()
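Putting that together with the server from the question, a minimal self-contained sketch could look like the following (the shutdown print is just a placeholder, and event.set is scheduled directly instead of going through a helper):

import asyncio
import signal
from types import FrameType
from typing import Optional

event: asyncio.Event

def handle_signal(signum: int, frame: Optional[FrameType]) -> None:
    # Only schedule the flag update; call_soon_threadsafe also wakes up the loop.
    asyncio.get_running_loop().call_soon_threadsafe(event.set)

async def main() -> None:
    global event
    event = asyncio.Event()
    signal.signal(signal.SIGINT, handle_signal)
    server = await asyncio.start_server(lambda reader, writer: None, '127.0.0.1', 7070)
    async with server:
        serve = asyncio.create_task(server.serve_forever())
        stop = asyncio.create_task(event.wait())
        await asyncio.wait({serve, stop}, return_when=asyncio.FIRST_COMPLETED)
        serve.cancel()
        print('SIGINT received, shutting down.')  # placeholder shutdown logic

asyncio.run(main())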
What is the correct way to send a disconnect signal to a thread containing a SingleServerIRCBot?
I am instantiating bots that connect to twitch with
import threading
import irc.bot

class MyBot(irc.bot.SingleServerIRCBot):
    ...

bot = MyBot(...)
threads = []
t = threading.Thread(target=bot.start)
threads.append(t)
t.start()
When the stream no longer exists, no matter what I've tried, I haven't been able to get the thread to successfully end. How should I go about sending a signal to the thread that tells it to exit the channel, kill the bot, and then end itself?
The code for the .start method can be found here https://github.com/jaraco/irc/blob/master/irc/bot.py#L331
My first thought is to override that method with a while loop that has an exit condition. I haven't had any luck with that so far though.
Furthermore, there is a .die method here https://github.com/jaraco/irc/blob/master/irc/bot.py#L269 but how can I call that method when the thread is executing an infinite loop?
Trying to kill the threads directly ends up with them persisting, and eventually throwing errors about the total number of threads that my process is running.
Edit for the bounty: I would also accept an answer that describes a better way to handle multiple IRC bots at once.
I don't think you could (or should) kill a thread directly, but you could stop the task running on that thread. Then the thread would be inactive and you could remove it from the threads list, if you like. I'm not familiar with SingleServerIRCBot, but I'll use the class below as an example.
import time

class MyTask:
    def __init__(self):
        self._active = True

    def start(self):
        while self._active:
            print('running')
            time.sleep(1)

    def die(self):
        self._active = False
In Python 3, threads have a _target attribute, from which we can access the target function/method. We could use this attribute to access the target's object and call the die method (eg: thread._target.__self__.die()). However I think it would be best to subclass Thread and store the target object in a variable, as _target is a private attribute, and also for compatibility reasons.
import threading

class MyThread(threading.Thread):
    def __init__(self, target, args=()):
        super(MyThread, self).__init__()
        self.target = target
        self.args = args

    def run(self):
        self.target.start(*self.args)

    def stop_task(self):
        self.target.die()
Using this class we would pass a MyTask object as a target, and the start method would be called from MyThread.run. Now we can use MyThread.stop_task to stop the task running on this thread.
o = MyTask()
t = MyThread(target=o)
t.start()
t.stop_task()
time.sleep(1.1)
print(t.is_alive())
Note that I'm waiting 1.1 sec to test if the thread is alive. That's because the target (MyTask.start) will take up to one second to stop. This method doesn't kill the thread, but calls MyTask.die and waits for the task to finish. If you want to end the task immediately (and lose any resources used by the task) you could use a Process and end it with .terminate. You should also choose multiprocessing over multithreading if your task is performing more CPU operations than IO operations, because processes are not limited by the GIL.
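As a quick illustration of that Process alternative (reusing the MyTask class from above; this is only a sketch, not something the bot code needs verbatim):

import time
from multiprocessing import Process

if __name__ == '__main__':
    task = MyTask()
    p = Process(target=task.start)   # run the task in a separate process
    p.start()
    time.sleep(2)
    p.terminate()                    # ends the process immediately
    p.join()
    print(p.is_alive())              # False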
After studying the source code, I noticed that .die() calls sys.exit, so we can't use it to terminate the task because it would stop the program. It seems the reason for this is that .start() calls the parent object's .start(), which then calls the .process_forever() method of a Reactor object. This method starts running Reactor.process_once() in an infinite loop with no break condition.
A possible solution is to subclass SingleServerIRCBot and use a boolean variable to break the loop. This class should override .start() and .die(), in order to stop the bot running on a thread. The .die() method would set the flag to false, and .start() would call Reactor.process_once() in a loop.
import irc.bot

class MyBot(irc.bot.SingleServerIRCBot):
    def __init__(self, channel, nickname, server, port=6667):
        super(MyBot, self).__init__([(server, port)], nickname, nickname)
        self.channel = channel
        self._active = True

    def start(self):
        self._connect()
        while self._active:
            self.reactor.process_once(timeout=0.2)

    def die(self, msg="Bye, cruel world!"):
        self.connection.disconnect(msg)
        self._active = False
Now we can stop the bot either by calling .stop_task() on the thread running the bot, or by calling the .die() method of the bot directly.
host, port = 'irc.freenode.net', 6667
nick = 'My-Bot'
channel = '#python'
bot = MyBot(channel, nick, host, port)
t = MyThread(bot)
t.start()
t.stop_task()
#bot.die()
I have a producer thread that produces data from a serial connection and puts them into multiple queues that will be used by different consumer threads. However, I'd like to be able to add in additional queues (additional consumers) from the main thread after the producer thread has already started running.
I.e. in the code below, how could I add a Queue to listOfQueues from the main thread while this thread is running? Can I add a method such as addQueue(newQueue) to this class which appends to its listOfQueues? This doesn't seem likely as the thread will be in the run method. Can I create some sort of Event similar to the stop event?
class ProducerThread(threading.Thread):
    def __init__(self, listOfQueues):
        super(ProducerThread, self).__init__()
        self.listOfQueues = listOfQueues
        self._stop_event = threading.Event()  # Flag to be set when the thread should stop

    def run(self):
        ser = serial.Serial()  # Some serial connection
        while(not self.stopped()):
            try:
                bytestring = ser.readline()  # Serial connection or "producer" at some rate
                for q in self.listOfQueues:
                    q.put(bytestring)
            except serial.SerialException:
                continue

    def stop(self):
        '''
        Call this function to stop the thread. Must also use .join() in the main
        thread to fully ensure the thread has completed.
        :return:
        '''
        self._stop_event.set()

    def stopped(self):
        '''
        Call this function to determine if the thread has stopped.
        :return: boolean True or False
        '''
        return self._stop_event.is_set()
Sure, you can simply have an append function that adds to your list. E.g.
def append(self, element):
    self.listOfQueues.append(element)
That will work even after your thread's start() method has been called.
Edit: for non-thread-safe procedures you can use a lock, e.g.:
def unsafe(self, element):
    with self.lock:
        # do stuff
You would then also need to add the lock inside your run method, e.g.:
with lock:
    for q in self.listOfQueues:
        q.put(bytestring)
Any code acquiring a lock will wait for the lock to be released elsewhere.
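Putting both pieces together, here is a sketch of what the producer could look like with an addQueue method guarded by a lock (the stand-in data line replaces the serial read so the sketch runs on its own):

import threading

class ProducerThread(threading.Thread):
    def __init__(self, listOfQueues):
        super(ProducerThread, self).__init__()
        self.listOfQueues = listOfQueues
        self.lock = threading.Lock()
        self._stop_event = threading.Event()

    def addQueue(self, newQueue):
        # Called from the main thread; the lock keeps run() from iterating
        # over the list while it is being modified.
        with self.lock:
            self.listOfQueues.append(newQueue)

    def run(self):
        while not self._stop_event.is_set():
            bytestring = b'data'           # stand-in for ser.readline()
            with self.lock:
                for q in self.listOfQueues:
                    q.put(bytestring)
            self._stop_event.wait(0.1)     # pace the loop for this sketch

    def stop(self):
        self._stop_event.set()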
Can somebody provide a sample of code which listens for keypresses in a non-blocking manner with asyncio and prints the keycode to the console on every press?
It's not a question about some graphical toolkit.
So the link provided by Andrea Corbellini is a clever and thorough solution to the problem, but also quite complicated. If all you want to do is prompt your user to enter some input (or simulate raw_input), I prefer to use the much simpler solution:
import sys
import functools
import asyncio as aio

class Prompt:
    def __init__(self, loop=None):
        self.loop = loop or aio.get_event_loop()
        self.q = aio.Queue()
        self.loop.add_reader(sys.stdin, self.got_input)

    def got_input(self):
        aio.ensure_future(self.q.put(sys.stdin.readline()), loop=self.loop)

    async def __call__(self, msg, end='\n', flush=False):
        print(msg, end=end, flush=flush)
        return (await self.q.get()).rstrip('\n')

prompt = Prompt()
raw_input = functools.partial(prompt, end='', flush=True)

async def main():
    # wait for user to press enter
    await prompt("press enter to continue")
    # simulate raw_input
    print(await raw_input('enter something:'))

loop = aio.get_event_loop()
loop.run_until_complete(main())
loop.close()
EDIT: I removed the loop parameter from Queue as it is removed in 3.10.
Also, these days I use structured concurrency (trio), and if anyone is curious this is pretty easy to do in trio:
import trio, sys

async def main():
    async with trio.lowlevel.FdStream(sys.stdin.fileno()) as stdin:
        async for line in stdin:
            if line.startswith(b'q'):
                break
            print(line)

trio.run(main)
I wrote something similar as part of a package called aioconsole.
It provides a coroutine called get_standard_streams that returns two asyncio streams corresponding to stdin and stdout.
Here's an example:
import asyncio
import aioconsole

async def echo():
    stdin, stdout = await aioconsole.get_standard_streams()
    async for line in stdin:
        stdout.write(line)

loop = asyncio.get_event_loop()
loop.run_until_complete(echo())
It also includes an asynchronous equivalent to input:
something = await aioconsole.ainput('Enter something: ')
It should work for both file and non-file streams. See the implementation here.
Reading lines
The high-level pure-asyncio way to do this is as follows.
import asyncio
import sys

async def main():
    # Create a StreamReader with the default buffer limit of 64 KiB.
    reader = asyncio.StreamReader()
    pipe = sys.stdin
    loop = asyncio.get_event_loop()
    await loop.connect_read_pipe(lambda: asyncio.StreamReaderProtocol(reader), pipe)
    async for line in reader:
        print(f'Got: {line.decode()!r}')

asyncio.run(main())
The async for line in reader loop can be written more explicitly, e.g. if you want to print a prompt or catch exceptions inside the loop:
while True:
    print('Prompt: ', end='', flush=True)
    try:
        line = await reader.readline()
        if not line:
            break
    except ValueError:
        print('Line length went over StreamReader buffer limit.')
    else:
        print(f'Got: {line.decode()!r}')
An empty line (not '\n' but an actually empty string '') means end-of-file. Note that it is possible for await reader.readline() to return '' right after reader.at_eof() returned False. See Python asyncio: StreamReader for details.
Here readline() is asynchronously gathering a line of input. That is, the event loop can run while the reader waits for more characters. In contrast, in the other answers, the event loop could block: it could detect that some input is available, enter the function calling sys.stdin.readline(), and then block on it until an endline becomes available (blocking any other tasks from entering the loop). Of course this isn't a problem in most cases, as the endline becomes available together with (in case of line buffering, which is the default) or very soon after (in other cases, assuming reasonably short lines) any initial characters of a line.
Reading character by character
You can also read individual bytes with await reader.readexactly(1) to read byte-per-byte, when reading from a pipe. When reading key-presses from a terminal, it needs to be set up properly, see Key Listeners in python? for more. On UNIX:
import asyncio
import contextlib
import sys
import termios

@contextlib.contextmanager
def raw_mode(file):
    old_attrs = termios.tcgetattr(file.fileno())
    new_attrs = old_attrs[:]
    new_attrs[3] = new_attrs[3] & ~(termios.ECHO | termios.ICANON)
    try:
        termios.tcsetattr(file.fileno(), termios.TCSADRAIN, new_attrs)
        yield
    finally:
        termios.tcsetattr(file.fileno(), termios.TCSADRAIN, old_attrs)

async def main():
    with raw_mode(sys.stdin):
        reader = asyncio.StreamReader()
        loop = asyncio.get_event_loop()
        await loop.connect_read_pipe(lambda: asyncio.StreamReaderProtocol(reader), sys.stdin)
        while not reader.at_eof():
            ch = await reader.read(1)
            # '' means EOF, chr(4) means EOT (sent by CTRL+D on UNIX terminals)
            if not ch or ord(ch) <= 4:
                break
            print(f'Got: {ch!r}')

asyncio.run(main())
Note this is not really one character or one key at a time: if the user presses a key combination that gives a multi-byte character, like ALT+E, nothing will happen on pressing ALT and two bytes will be sent by the terminal on pressing E, which will result in two iterations of the loop. But it's good enough for ASCII characters like letters and ESC.
If you need actual key presses like ALT, I suppose the only way is to use a suitable library and make it work with asyncio by calling it in a separate thread, like here. In fact the library+thread approach is probably simpler in other cases as well.
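For completeness, a minimal sketch of that library-plus-thread idea: run the blocking read in a worker thread via run_in_executor so the event loop stays responsive. Here plain sys.stdin.read stands in for whatever key-press library you would actually call:

import asyncio
import sys

def blocking_read_char() -> str:
    # Stand-in for a blocking call into a key-press library.
    return sys.stdin.read(1)

async def main():
    loop = asyncio.get_running_loop()
    while True:
        ch = await loop.run_in_executor(None, blocking_read_char)
        if not ch or ch == 'q':   # EOF or 'q' ends the loop
            break
        print(f'Got: {ch!r}')

asyncio.run(main())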
Under the hood
If you want finer control you can implement your own protocol in place of StreamReaderProtocol: a class implementing any number of functions of asyncio.Protocol. Minimal example:
class MyReadProtocol(asyncio.Protocol):
    def __init__(self, reader: asyncio.StreamReader):
        self.reader = reader

    def connection_made(self, pipe_transport):
        self.reader.set_transport(pipe_transport)

    def data_received(self, data: bytes):
        self.reader.feed_data(data)

    def connection_lost(self, exc):
        if exc is None:
            self.reader.feed_eof()
        else:
            self.reader.set_exception(exc)
You could replace the StreamReader with your own buffering mechanism. After you call connect_read_pipe(lambda: MyReadProtocol(reader), pipe), there will be exactly one call to connection_made, then arbitrary many calls to data_received (with data depending on terminal and python buffering options), then eventually exactly one call to connection_lost (on end-of-file or on error). In case you ever need them, connect_read_pipe returns a tuple (transport, protocol), where protocol is an instance of MyReadProtocol (created by the protocol factory, which in our case is a trivial lambda), while transport is an instance of asyncio.ReadTransport (specifically some private implementation like _UnixReadPipeTransport on UNIX).
But in the end this is all boilerplate that eventually relies on loop.add_reader (unrelated to StreamReader).
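For comparison, a minimal sketch of using loop.add_reader directly, without a StreamReader or a protocol (stdin is assumed to deliver whole lines at a time, e.g. a line-buffered terminal):

import asyncio
import sys

async def main():
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()
    # The callback fires once stdin becomes readable; with line-buffered input
    # readline() then returns without holding up the loop for long.
    loop.add_reader(sys.stdin, lambda: queue.put_nowait(sys.stdin.readline()))
    line = await queue.get()
    loop.remove_reader(sys.stdin)
    print(f'Got: {line!r}')

asyncio.run(main())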
For Windows you might need to choose the ProactorEventLoop (the default since Python 3.8), see Python asyncio: Platform Support.
An alternative to using queues would be to make the command line an async generator, and process the commands as they come in, like so:
import asyncio
import sys

class UserInterface(object):
    def __init__(self, task, loop):
        self.task = task
        self.loop = loop

    def get_ui(self):
        return asyncio.ensure_future(self._ui_task())

    async def _ui_cmd(self):
        while True:
            cmd = sys.stdin.readline()
            cmd = cmd.strip()
            if cmd == 'exit':
                self.loop.stop()
                return
            yield cmd

    async def _ui_task(self):
        async for cmd in self._ui_cmd():
            if cmd == 'stop_t':
                self.task.stop()
            elif cmd == 'start_t':
                self.task.start()
Python 3.10 update to the solution provided by bj0:
class Prompt:
    def __init__(self):
        self.loop = asyncio.get_running_loop()
        self.q = asyncio.Queue()
        self.loop.add_reader(sys.stdin, self.got_input)

    def got_input(self):
        asyncio.ensure_future(self.q.put(sys.stdin.readline()), loop=self.loop)

    async def __call__(self, msg, end='\n', flush=False):
        print(msg, end=end, flush=flush)
        # https://docs.python.org/3/library/asyncio-task.html#coroutine
        task = asyncio.create_task(self.q.get())
        return (await task).rstrip('\n')
I tested it on a websocket client, inside an async function that would get stuck waiting for input. I replaced s = input("insert string") with s = await prompt("insert string"), and now ping-ponging works even while the program is waiting for user input: the connection no longer stops and the "timed out waiting for keepalive pong" issue is solved.
I am developing a multi-threaded application in python. I have following scenario.
There are 2-3 producer threads which communicate with the DB, get some data in large chunks, and fill them up in a queue.
There is an intermediate worker which breaks the large chunks fetched by the producer threads into smaller ones and fills them into another queue.
There are 5 consumer threads which consume queue created by intermediate worker thread.
Data source objects are accessed by the producer threads through their APIs. These data sources are completely separate, so the producers only understand the presence or absence of data which is supposed to be given out by the data source object.
I create threads of these three types and make the main thread wait for their completion by calling join() on them.
Now for such a setup I want a common error handler which senses failure of any thread or any exception and decides what to do. For example, if I press Ctrl+C after I start my application, the main thread dies but the producer and consumer threads continue to run. I would like the entire application to shut down once Ctrl+C is pressed. Similarly, if some DB error occurs in the data source module, the producer thread should get notified of it.
This is what I have done so far:
I have created a class ThreadManager; its object is passed to all threads. I have written an error handler method and passed it to sys.excepthook. This handler should catch exceptions and errors, and it should then call methods of the ThreadManager class to control the running threads. Here is a snippet:
class Producer(threading.Thread):
    ....
    def produce():
        data = dataSource.getData()

class DataSource:
    ....
    def getData():
        raise Exception("critical")

def customHandler(exceptionType, value, stackTrace):
    print "In custom handler"

sys.excepthook = customHandler
Now when a thread of the Producer class calls getData() of the DataSource class, an exception is thrown. But this exception is never caught by my customHandler method.
What am I missing? Also, in such a scenario, what other strategy can I apply? Please help. Thank you for having enough patience to read all this :)
What you need is a decorator. In essence you are modifying your original function and putting it inside a try-except:
import os

def exception_decorator(func):
    def _function(*args):
        try:
            result = func(*args)
        except:
            print('*** ESC default handler ***')
            os._exit(1)
        return result
    return _function
If your thread function is called myfunc, then you add the following line above your function definition
@exception_decorator
def myfunc():
    pass
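A small sketch of how this plays out with a thread (the failing body is purely illustrative): when the wrapped function raises, the decorator prints the message and terminates the whole process with os._exit(1).

import threading

@exception_decorator
def worker_that_fails():
    raise RuntimeError('simulated failure on a worker thread')

t = threading.Thread(target=worker_that_fails)
t.start()
t.join()   # the process exits from inside the decorator once the worker raises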
Can't you just catch "KeyboardInterrupt" when pressing Ctrl+C and do:
for thread in threading.enumerate():
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
and have a flag in each threaded class, such as self.alive; you could then theoretically set thread.alive = False and have it stop gracefully?
for thread in threading.enumerate():
    thread.alive = False
    time.sleep(5)  # Grace period
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
example:
import os
from threading import *
from time import sleep

class worker(Thread):
    def __init__(self):
        self.alive = True
        Thread.__init__(self)
        self.start()

    def run(self):
        while self.alive:
            sleep(0.1)

runner = worker()

try:
    raw_input('Press ctrl+c!')
except:
    pass

for thread in enumerate():
    thread.alive = False
    sleep(1)
    try:
        thread._Thread__stop()
        thread._Thread__delete()
    except:
        pass

# There will always be 1 thread alive and that's the __main__ thread.
while len(enumerate()) > 1:
    sleep(1)
os._exit(0)
Try going about it by changing the internal system exception handler?
import sys

origExcepthook = sys.excepthook

def uberexcept(exctype, value, traceback):
    if exctype == KeyboardInterrupt:
        print "Gracefully shutting down all the threads"
        # enumerate() thingie here.
    else:
        origExcepthook(exctype, value, traceback)

sys.excepthook = uberexcept