Python logging to a class attribute

I have two threads, instances of different classes, that work independently: they do their work and then wait for each other. One of the threads can raise an exception - in that case I want to stop the parallel thread and the whole program. To do this I want to pass the exception's log message up to the main program loop. But I want to get this message from logging - the same message that is displayed on the console and written to the log file - there is a reason for this.
Is there any supported way to do this? I saw some complicated solutions, but they were not working for me.
I can't just append a plain string like "Some exception!" - the message has to go through logging and its formatter, so that all logs stay consistent.
How can I make append() "catch" the log message - is that possible?
self.exceptions.append( catch_somehow(log.error(message)) )
Example code below:
from threading import Thread
# other imports

class SomeThreadWrapper(Thread):
    def __init__(self):
        super().__init__()
        self.exceptions = []
        # class stuff

    def SomeFunction(self):
        try:
            # some logic
            pass
        except SomeException as e:
            # do something
            log.error("Oh no, some exception occurred!")
            self.exceptions.append(<somehow_catch_the_logging_from_line_above>)
            raise e

    def get_exception(self):
        return self.exceptions

class SomeThread(Thread):
    # class stuff
    pass

if __name__ == "__main__":
    # some logic
    thread_wrapper = SomeThreadWrapper()
    thread = SomeThread()
    thread_wrapper.start()
    thread.start()
    thread_wrapper.join()
    if len(thread_wrapper.get_exception()):
        <Join and Kill the thread to not waste time>
        <stop_the program>
    thread.join()
    # some other logic
Does the logging module's infrastructure allow for something like this?
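One possible approach - not from the original post, just a sketch - is to attach an extra handler to the logger: logging allows any number of handlers per logger, so a small custom Handler that appends each formatted record to the thread's own list would yield exactly the same text as the console and file handlers, provided it is given the same Formatter. The ListHandler name and the formatter string below are illustrative assumptions.

import logging

class ListHandler(logging.Handler):
    """Hypothetical handler that stores formatted records in a plain list."""
    def __init__(self, target_list):
        super().__init__()
        self.target_list = target_list

    def emit(self, record):
        # self.format() applies whatever Formatter is set on this handler,
        # so the stored string matches what the other handlers emit.
        self.target_list.append(self.format(record))

# Sketch of wiring it up inside SomeThreadWrapper.__init__ (names assumed):
# handler = ListHandler(self.exceptions)
# handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
# log.addHandler(handler)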

Related

patch object spawned in sub-process

I am using the multiprocessing package to create sub-processes. I need to handle exceptions from a sub-process: catch, report, terminate and re-spawn it.
I am struggling to write a test for this.
I would like to patch the object that represents my sub-process and raise an exception from it, to check that the handling is correct.
But it looks like the object is patched only in the main process, while the spawned process gets the unchanged version. Any ideas how to accomplish this?
Example:
import multiprocessing
import time
from unittest import mock

class SubprocessClass(multiprocessing.Process):
    def __init__(self) -> None:
        super().__init__()

    def simple_method(self):
        return 42

    def run(self):
        try:
            self.simple_method()
        except Exception:
            # ok, exception handled
            pass
        else:
            # I wanted an exception! <- code goes here
            assert False

@mock.patch.object(SubprocessClass, "simple_method")
def test_patch_subprocess(mock_simple_method):
    mock_simple_method.side_effect = Exception("exception from mock")
    subprocess = SubprocessClass()
    subprocess.run()
    subprocess.start()
    time.sleep(0.1)
    subprocess.join()
You can monkey-patch the object before it is started (it is a bit iffy, but you will get an actual process running that code):
def _this_always_raises(*args, **kwargs):
    raise RuntimeError("I am overridden")

def test_patch_subprocess():
    subprocess = SubprocessClass()
    subprocess.simple_method = _this_always_raises
    subprocess.start()
    time.sleep(0.1)
    subprocess.join()
    assert subprocess.exitcode == 0
You could also mock multiprocessing to behave like threading, but that is a bit unpredictable.
If you want to do it generically for all objects, you can replace the class with another one derived from the original, with only one method overridden:
class SubprocessClassThatRaisesInSimpleMethod(SubprocessClass):
    def simple_method(self):
        raise RuntimeError("I am overridden")

# then mock with unittest.mock the process spawner to use this class instead of SubprocessClass
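A minimal sketch of that last step - my addition, not part of the original answer. It assumes the code under test looks the class up through a module-level name such as worker_module.SubprocessClass (a placeholder path), which unittest.mock.patch can temporarily replace with the raising subclass:

from unittest import mock

def test_spawner_uses_raising_subclass():
    # "worker_module.SubprocessClass" is a placeholder for wherever the code
    # under test resolves the class when it spawns the sub-process.
    with mock.patch("worker_module.SubprocessClass",
                    new=SubprocessClassThatRaisesInSimpleMethod):
        # run the code that spawns and supervises the sub-process here,
        # then assert that it caught, reported and re-spawned as expected
        pass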

Using context managers for recovering from celery's SoftTimeLimitExceeded

I am trying to set a maximum run time for my celery jobs.
I am currently recovering from exceptions with a context manager. I ended up with code very similar to this snippet:
from celery.exceptions import SoftTimeLimitExceeded

class Manager:
    def __enter__(self):
        return self

    def __exit__(self, error_type, error, tb):
        if error_type == SoftTimeLimitExceeded:
            logger.info('job killed.')
            # swallow the exception
            return True

@task
def do_foo():
    with Manager():
        run_task1()
        run_task2()
        run_task3()
What I expected:
If do_foo times out in run_task1, the logger logs, the SoftTimeLimitExceeded exception is swallowed, the body of the manager is skipped, the job ends without running run_task2 and run_task3.
What I observe:
do_foo times out in run_task1, SoftTimeLimitExceeded is raised, the logger logs, the SoftTimeLimitExceeded exception is swallowed but run_task2 and run_task3 are running nevertheless.
I am looking for an answer to the following two questions:
Why is run_task2 still executed when SoftTimeLimitExceeded is raised in run_task1 in this setting?
Is there an easy way to transform my code so that it performs as expected?
Cleaning up the code
This code is pretty good; there's not much cleaning up to do.
You shouldn't return self from __enter__ if the context manager isn't designed to be used with the as keyword.
When comparing classes, is should be used rather than ==, since class objects are singletons...
but here you should prefer issubclass, to properly emulate exception handling.
Implementing these changes gives:
from celery.exceptions import SoftTimeLimitExceeded

class Manager:
    def __enter__(self):
        pass

    def __exit__(self, error_type, error, tb):
        # error_type is None when the block exits without an exception
        if error_type is not None and issubclass(error_type, SoftTimeLimitExceeded):
            logger.info('job killed.')
            # swallow the exception
            return True

@task
def do_foo():
    with Manager():
        run_task1()
        run_task2()
        run_task3()
Debugging
I created a mock environment for debugging:
class SoftTimeLimitExceeded(Exception):
    pass

class Logger:
    info = print

logger = Logger()
del Logger

def task(f):
    return f

def run_task1():
    print("running task 1")
    raise SoftTimeLimitExceeded

def run_task2():
    print("running task 2")

def run_task3():
    print("running task 3")
Executing this and then your program gives:
>>> do_foo()
running task 1
job killed.
This is the expected behaviour.
Hypotheses
I can think of two possibilities:
Something in the chain, probably run_task1, is asynchronous.
celery is doing something weird.
I'll run with the second hypothesis because I can't test the former.
I've been bitten by the obscure behaviour of a combination between context managers, exceptions and coroutines before, so I know what sorts of problems it causes. This seems like one of them, but I'll have to look at celery's code before I can go any further.
Edit: I can't make head nor tail of celery's code, and searching hasn't turned up the code that raises SoftTimeLimitExceeded to allow me to trace it backwards. I'll pass it on to somebody more experienced with celery to see if they can work out how it works.

How to detect exceptions in concurrent.futures in Python3?

I have just moved to Python 3 because of its concurrent.futures module. I was wondering if I could get it to detect errors. I want to use concurrent.futures for parallel programming; if there are more efficient modules, please let me know.
I do not like multiprocessing because it is too complicated and not well documented. It would be great, however, if someone could write a Hello World using multiprocessing, with only functions and no classes, to compute something in parallel, so that it is easy to understand.
Here is a simple script:
from concurrent.futures import ThreadPoolExecutor

def pri():
    print("Hello World!!!")

def start():
    try:
        while True:
            pri()
    except KeyboardInterrupt:
        print("YOU PRESSED CTRL+C")

with ThreadPoolExecutor(max_workers=3) as exe:
    exe.submit(start)
The above code is just a demo of how CTRL+C will not cause the statement to be printed.
What I want is to be able to call a function if an error is present. The error detection must come from the function itself.
Another example
import socket
import time
from concurrent.futures import ThreadPoolExecutor

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def con():
    try:
        s.connect((x, y))
        main()
    except socket.gaierror:
        err()

def err():
    time.sleep(1)
    con()

def main():
    s.send("[+] Hello")

with ThreadPoolExecutor(max_workers=3) as exe:
    exe.submit(con)
Way too late to the party, but maybe it'll help someone else...
I'm pretty sure the original question was not really answered. Folks got hung up on the fact that user5327424 was using a keyboard interrupt to raise an exception when the point was that the exception (however it was caused) was not raised. For example:
import concurrent.futures

def main():
    numbers = range(10)
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = {executor.submit(raise_my_exception, number): number for number in numbers}

def raise_my_exception(number):
    print('Proof that this function is getting called. %s' % number)
    raise Exception('This never sees the light of day...')

main()
When the example code above is executed, you will see the text inside the print statement displayed on the screen, but you will never see the exception. This is because the results of each thread are held in the results object. You need to iterate that object to get to your exceptions. The following example shows how to access the results.
import concurrent.futures

def main():
    numbers = range(10)
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = {executor.submit(raise_my_exception, number): number for number in numbers}
    for result in results:
        # This will cause the exception to be raised (but only the first one)
        print(result.result())

def raise_my_exception(number):
    print('Proof that this function is getting called. %s' % number)
    raise Exception('This will be raised once the results are iterated.')

main()
I'm not sure whether I like this behavior or not, but it does allow the threads to execute fully, regardless of the exceptions encountered inside the individual threads.
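A related sketch, not part of the original answer: concurrent.futures can also report each failure without re-raising it. Future.exception() returns the exception instance (or None), and concurrent.futures.as_completed() lets you inspect every future instead of stopping at the first result() call that raises.

import concurrent.futures

def raise_my_exception(number):
    raise Exception('failure in task %s' % number)

def main():
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [executor.submit(raise_my_exception, n) for n in range(10)]
        for future in concurrent.futures.as_completed(futures):
            exc = future.exception()  # None if the task succeeded
            if exc is not None:
                print('task failed: %s' % exc)

main()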
Here's a solution. I'm not sure you'll like it, but I can't think of any other. I've modified your code to make it work.
from concurrent.futures import ThreadPoolExecutor
import time

quit = False

def pri():
    print("Hello World!!!")

def start():
    while quit is not True:
        time.sleep(1)
        pri()

try:
    pool = ThreadPoolExecutor(max_workers=3)
    pool.submit(start)
    while quit is not True:
        print("hei")
        time.sleep(1)
except KeyboardInterrupt:
    quit = True
Here are the points:
When you use with ThreadPoolExecutor(max_workers=3) as exe, it waits until all tasks are done. Have a look at the docs:
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
You can avoid having to call this method explicitly if you use the with statement, which will shutdown the Executor (waiting as if Executor.shutdown() were called with wait set to True)
It's like calling join() on a thread.
That's why I replaced it with:
pool = ThreadPoolExecutor(max_workers=3)
pool.submit(start)
The main thread must be doing "work" to be able to catch a Ctrl+C, so you can't just leave the main thread there and exit; the simplest way is to run an infinite loop.
Now that you have a loop running in the main thread, when you hit CTRL+C the program will enter the except KeyboardInterrupt block and set quit = True. Then your worker thread can exit.
Strictly speaking, this is only a workaround. It seems to me there is no other way to do this.
Edit
I'm not sure what's bothering you, but you can catch an exception in another thread without a problem:
import socket
import time
from concurrent.futures import ThreadPoolExecutor

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def con():
    try:
        raise socket.gaierror
        main()
    except socket.gaierror:
        print("gaierror occurred")
        err()

def err():
    print("err invoked")
    time.sleep(1)
    con()

def main():
    s.send("[+] Hello")

with ThreadPoolExecutor(3) as exe:
    exe.submit(con)
Output
gaierror occurred
err invoked
gaierror occurred
err invoked
gaierror occurred
err invoked
gaierror occurred
...

Make Python unittest fail on exception from any thread

I am using the unittest framework to automate integration tests of multi-threaded python code, external hardware and embedded C. Despite my blatant abuse of a unittesting framework for integration testing, it works really well. Except for one problem: I need the test to fail if an exception is raised from any of the spawned threads. Is this possible with the unittest framework?
A simple but non-workable solution would be to either a) refactor the code to avoid multi-threading or b) test each thread separately. I cannot do that because the code interacts asynchronously with the external hardware. I have also considered implementing some kind of message passing to forward the exceptions to the main unittest thread. This would require significant testing-related changes to the code being tested, and I want to avoid that.
Time for an example. Can I modify the test script below to fail on the exception raised in my_thread without modifying the x.ExceptionRaiser class?
import unittest
import x

class Test(unittest.TestCase):
    def test_x(self):
        my_thread = x.ExceptionRaiser()
        # Test case should fail when thread is started and raises
        # an exception.
        my_thread.start()
        my_thread.join()

if __name__ == '__main__':
    unittest.main()
At first, sys.excepthook looked like a solution. It is a global hook which is called every time an uncaught exception is thrown.
Unfortunately, this does not work. Why? Well, threading wraps your run function in code which prints the lovely tracebacks you see on screen (noticed how it always tells you Exception in thread {Name of your thread here}? This is how it's done).
Starting with Python 3.8, there is a function which you can override to make this work: threading.excepthook
... threading.excepthook() can be overridden to control how uncaught exceptions raised by Thread.run() are handled
So what do we do? Replace this function with our logic, and voilà:
For python >= 3.8
import traceback
import threading
import os

class GlobalExceptionWatcher(object):
    def _store_excepthook(self, args):
        '''
        Used as an exception handler that stores any uncaught exceptions.
        '''
        self.__org_hook(args)
        formated_exc = traceback.format_exception(args.exc_type, args.exc_value, args.exc_traceback)
        self._exceptions.append('\n'.join(formated_exc))
        return formated_exc

    def __enter__(self):
        '''
        Register us to the hook.
        '''
        self._exceptions = []
        self.__org_hook = threading.excepthook
        threading.excepthook = self._store_excepthook

    def __exit__(self, type, value, traceback):
        '''
        Remove us from the hook, and ensure no exceptions were thrown.
        '''
        threading.excepthook = self.__org_hook
        if len(self._exceptions) != 0:
            tracebacks = os.linesep.join(self._exceptions)
            raise Exception(f'Exceptions in other threads: {tracebacks}')
For older versions of Python, this is a bit more complicated.
Long story short, it appears that the threading module has an undocumented import which does something along the lines of:
threading._format_exc = traceback.format_exc
Not very surprisingly, this function is only called when an exception is thrown from a thread's run function.
So for python <= 3.7:
import threading
import os

class GlobalExceptionWatcher(object):
    def _store_excepthook(self):
        '''
        Used as an exception handler that stores any uncaught exceptions.
        '''
        formated_exc = self.__org_hook()
        self._exceptions.append(formated_exc)
        return formated_exc

    def __enter__(self):
        '''
        Register us to the hook.
        '''
        self._exceptions = []
        self.__org_hook = threading._format_exc
        threading._format_exc = self._store_excepthook

    def __exit__(self, type, value, traceback):
        '''
        Remove us from the hook, and ensure no exceptions were thrown.
        '''
        threading._format_exc = self.__org_hook
        if len(self._exceptions) != 0:
            tracebacks = os.linesep.join(self._exceptions)
            raise Exception('Exceptions in other threads: %s' % tracebacks)
Usage:
my_thread = x.ExceptionRaiser()
# will fail when thread is started and raises an exception.
with GlobalExceptionWatcher():
    my_thread.start()
    my_thread.join()
You still need to join yourself, but upon exit, the with-statement's context manager will check for any exception thrown in other threads, and will raise an exception appropriately.
THE CODE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED
This is an undocumented, sort-of-horrible hack. I tested it on linux and windows, and it seems to work. Use it at your own risk.
I've come across this problem myself, and the only solution I've been able to come up with is subclassing Thread to include an attribute for whether or not it terminates without an uncaught exception:
from threading import Thread

class ErrThread(Thread):
    """
    A subclass of Thread that will store the exception if the thread does
    not exit normally.
    """
    def run(self):
        try:
            Thread.run(self)
        except Exception as e:
            self.err = e
        else:
            self.err = None

class TaskQueue(object):
    """
    A utility class to run ErrThread objects in parallel and raise an exception
    in the event that *any* of them fail.
    """
    def __init__(self, *tasks):
        self.threads = []
        for t in tasks:
            try:
                self.threads.append(ErrThread(**t))  ## passing in a dict of target and args
            except TypeError:
                self.threads.append(ErrThread(target=t))

    def run(self):
        for t in self.threads:
            t.start()
        for t in self.threads:
            t.join()
            if t.err:
                raise Exception('Thread %s failed with error: %s' % (t.name, t.err))
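A minimal usage sketch for TaskQueue - my addition, with made-up worker functions:

def ok_worker():
    pass

def failing_worker():
    raise ValueError("boom")

# Each task may be a dict of Thread keyword arguments or a plain callable.
tasks = TaskQueue({'target': failing_worker}, ok_worker)
tasks.run()  # raises Exception('Thread ... failed with error: boom')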
I've been using the accepted answer above for a while now, but since Python 3.8 the solution doesn't work anymore, because the threading module no longer has the _format_exc import.
On the other hand, since Python 3.8 the threading module has a nice way to register custom excepthooks, so here is a simple solution for running unit tests which assert that some exceptions are raised inside threads:
def test_in_thread():
    import threading

    exceptions_caught_in_threads = {}

    def custom_excepthook(args):
        thread_name = args.thread.name
        exceptions_caught_in_threads[thread_name] = {
            'thread': args.thread,
            'exception': {
                'type': args.exc_type,
                'value': args.exc_value,
                'traceback': args.exc_traceback
            }
        }

    # Registering our custom excepthook to catch the exception in the threads
    threading.excepthook = custom_excepthook

    # dummy function that raises an exception
    def my_function():
        raise Exception('My Exception')

    # running the function in a thread
    thread_1 = threading.Thread(name='thread_1', target=my_function, args=())
    thread_1.start()
    thread_1.join()

    assert 'thread_1' in exceptions_caught_in_threads  # there was an exception in thread 1
    assert exceptions_caught_in_threads['thread_1']['exception']['type'] == Exception
    assert str(exceptions_caught_in_threads['thread_1']['exception']['value']) == 'My Exception'
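One caveat, my note rather than the original answer's: the test above replaces threading.excepthook globally and never restores it, so the custom hook can leak into other tests. A small sketch of a save-and-restore helper (the helper name is made up):

import threading

def run_thread_with_hook(thread, hook):
    """Install a threading excepthook only for the duration of one thread's run."""
    original_hook = threading.excepthook
    threading.excepthook = hook
    try:
        thread.start()
        thread.join()
    finally:
        # restore the previous behaviour so other tests are unaffected
        threading.excepthook = original_hook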

Python 3: Using a multiprocessing queue for logging

I've recently been given the challenge of working multiprocessing into our software. I want a main process to spawn subprocesses, and I need some way of sending logging information back to the main process. This is mainly because a module we use writes warning and error messages to a logging object, and we want these messages to appear in the gui, which runs in the main process.
The obvious approach was to write a small class with a write() method that put()s onto a queue, and then use this class as the stream in a logging StreamHandler. The main process would then get() from this queue to send the text to the GUI. But this didn't seem to work, and I don't know why.
I wrote some sample code to demonstrate the problem. It uses a logging object to write a queue in a subprocess, and then the main process tries to read from the queue, but fails. Can someone help me figure out what is wrong with this?
import time, multiprocessing, queue, logging

class FileLikeQueue:
    """A file-like object that writes to a queue"""
    def __init__(self, q):
        self.q = q
    def write(self, t):
        self.q.put(t)
    def flush(self):
        pass

def func(q):
    """This function just writes the time every second for five
    seconds and then returns. The time is sent to the queue and
    to a logging object"""
    stream = FileLikeQueue(q)

    log = logging.getLogger()
    infohandler = logging.StreamHandler(stream)
    infohandler.setLevel(logging.INFO)
    infoformatter = logging.Formatter("%(message)s")
    infohandler.setFormatter(infoformatter)
    log.addHandler(infohandler)

    t1 = time.time()
    while time.time() - t1 < 5:  # run for five seconds
        log.info('Logging: ' + str(time.time()))
        q.put('Put: %s' % str(time.time()))
        time.sleep(1)

def main():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=func, args=(q,))
    p.start()

    # read the queue until it is empty
    while True:
        try:
            t = q.get()
        except queue.Empty:
            break
        print(t)

if __name__ == '__main__':
    main()
I expect the output to be:
Logging: 1333629221.01
Put: 1333629221.01
Logging: 1333629222.02
Put: 1333629222.02
Logging: 1333629223.02
Put: 1333629223.02
Logging: 1333629224.02
Put: 1333629224.02
Logging: 1333629225.02
Put: 1333629225.02
But what I get is:
Put: 1333629221.01
Put: 1333629222.02
Put: 1333629223.02
Put: 1333629224.02
Put: 1333629225.02
So the put() operation in func() works, but the logging doesn't. Why?
Thank you.
Your problem is with the configuration of the logging module:
You need to call log.setLevel(logging.INFO). The default log level is WARNING, so your logs have no effect.
You did call setLevel on the handler object, but the logged messages never reach the handler because they are filtered by the logger. There is no need to call setLevel on the handler itself, because it processes all messages by default.
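A minimal sketch of the fix described above - my addition, keeping the handler setup from the question unchanged and only adding the missing logger level:

import logging

def configure_queue_logging(stream):
    """Same handler setup as in func(), plus the missing logger level."""
    log = logging.getLogger()
    log.setLevel(logging.INFO)  # the default WARNING level was filtering out log.info()
    infohandler = logging.StreamHandler(stream)
    infohandler.setFormatter(logging.Formatter("%(message)s"))
    log.addHandler(infohandler)
    return log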
