Python: remember status in a sequence of procedures

First of all, sorry for the wording of the question, I can't express it in a more compact form.
Let's say I have a code like this in Python:
something_happened = False

def main():
    # 'procs' is a list of procedures
    for proc in procs:
        try:
            # Any of these can set the 'something_happened'
            # global var to True
            proc()
        except Exception as e:
            handle_unexpected_exception(e)
            continue
    # If some procedure found some problem,
    # print a reminder to check the logging files
    if something_happened:
        print('Check the logfile, just in case.')
Any of the involved procedures may encounter a problem, but execution MUST continue: the problem is properly logged and that's really the ONLY handling needed, because the problems that may arise while running the procedures shouldn't stop the program. This shouldn't involve raising an exception and stopping the execution.
The reason the logfile should be checked is that some of the problems may need further human action, but the program can't do anything about them other than log them and keep running (long story).
Right now the only ways of achieving this that I can think of are to make each procedure set something_happened = True after logging a potential problem (using a global variable which may be set from any of the procedures), or to return a status code from the procedures.
And yes, I know I can raise an exception from the procedures instead of setting a global or returning an error code, but that would only work because I'm running them in a loop, and this may change in the future (and then raising an exception would jump out of the try block), so that's my last resort.
Can anyone suggest a better way of dealing with this situation? Yes, I know, this is a very particular use case, but that's the reason why I'm not raising an exception in the first place, and I'm just curious because I didn't find anything after googling for hours...
Thanks in advance :)

You have a variable that may be set to True by any of the procs. It looks like a common OOP pattern:
class A:
    """Don't do that"""
    def __init__(self, logger):
        self._logger = logger
        self._something_happened = False

    def proc1(self):
        try:
            ...
        except KeyError as e:
            self._something_happened = True
            self._logger.log(...)

    def proc2(self):
        ...

    def execute(self):
        for proc in [self.proc1, self.proc2, ...]:
            try:
                proc()
            except Exception as e:
                self._handle_unexpected_exception(e)
                continue
        if self._something_happened:
            print('Check the logfile, just in case.')
But that's a very bad idea, because you're violating the Single Responsibility Principle: your class has to know about proc1, proc2, ... You have to reverse the idea:
class Context:
    def __init__(self):
        self.something_happened = False

def main():
    ctx = Context()
    for proc in procs:
        try:
            proc(ctx)  # proc may set ctx.something_happened to True
        except Exception as e:
            handle_unexpected_exception(e)
            continue
    if ctx.something_happened:
        print('Check the logfile, just in case.')
Creating an empty class like that is not very attractive. You can take the idea further:
class Context:
    def __init__(self, logger):
        self._logger = logger
        self._something_happened = False

    def handle_err(self, e):
        self._something_happened = True
        self._logger.log(...)

    def handle_unexpected_exception(self, e):
        ...
        self._logger.log(...)

    def after(self):
        if self._something_happened:
            print('Check the logfile, just in case.')

def proc1(ctx):
    try:
        ...
    except KeyError as e:
        ctx.handle_err(e)  # you delegate the error handling to ctx

def proc2(ctx):
    ...

def main():
    ctx = Context(logging.getLogger("main"))
    for proc in procs:
        try:
            proc(ctx)
        except Exception as e:
            ctx.handle_unexpected_exception(e)
    ctx.after()
The main benefit here is that you can use another Context if you want:
class StrictContext:
    def handle_err(self, e):
        raise e

    def handle_unexpected_exception(self, e):
        raise e

    def after(self):
        pass
Or
class LooseContext:
    def handle_err(self, e):
        pass

    def handle_unexpected_exception(self, e):
        pass

    def after(self):
        pass
Or whatever you need.
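To make the swap concrete, here is a minimal sketch of how the caller picks the error policy, assuming the procs list and the Context classes defined above:

def main(ctx):
    for proc in procs:
        try:
            proc(ctx)
        except Exception as e:
            ctx.handle_unexpected_exception(e)
    ctx.after()

main(Context(logging.getLogger("main")))  # log problems and keep going
main(StrictContext())                     # fail fast instead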

Looks like the cleaner solution is to raise an exception, and I will change the code accordingly. The only problem is what will happen if in the future the loop goes away, but I suppose I'll cross that bridge when I come to it ;) and then I'll use another solution or I'll try to change the main code myself.
@cglacet, @Phydeaux, thanks for your help and suggestions.

Related

ExitStack within classes

I would like to understand why using the following snippet leads to an error:
a) I want to use the following class to create a context manager, as outlined in the link attached below. For me it is very important to keep the "class PrintStop(ExitStack)" form, so please bear in mind when trying to solve this issue that I already know there are other ways to use ExitStack(); I am interested in this specific way of using it:
class PrintStop(ExitStack):
    def __init__(self, verbose: bool = False):
        super().__init__()
        self.verbose = verbose

    def __enter__(self):
        super().__enter__()
        if not self.verbose:
            sys.stdout = self.enter_context(open(os.devnull, 'w'))
b) when trying to use the class in the intended way, I get the desired effect of stopping all printing within the "with" block, but when trying to print again after that block I get an error:
with PrintStop(verbose=False):
    print("this shouldn't be printed")  # <------ OK till here
print("this should be printed again as it is outside the with block")  # <----- ERROR
c) the error I get is "ValueError: I/O operation on closed file". The reason, I guess, is that the __exit__ method of ExitStack() is not automatically called once we exit the 'with' block. So, how may I change the class to fix this bug?
Here is a quick reference to a similar topic,
Pythonic way to compose context managers for objects owned by a class
ExitStack.__exit__ simply ensures that each context you enter has its __exit__ method called; it does not ensure that any changes you made inside the corresponding __enter__ (like assigning to sys.stdout) are undone.
Also, the purpose of an exit stack is to make it easy to enter contexts that require information not known when the with statement is introduced, or to create a variable number of contexts without having to enumerate them statically.
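For instance, a small sketch of that intended use case (the file names are hypothetical), opening a number of files only known at run time:

from contextlib import ExitStack

paths = ['a.txt', 'b.txt', 'c.txt']  # hypothetical, only known at run time

with ExitStack() as stack:
    files = [stack.enter_context(open(p)) for p in paths]
    # work with all the files; each one is closed when the block exits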
If you really want to use an exit stack, you'll need something like
import os
import sys
from contextlib import ExitStack

class PrintStop(ExitStack):
    def __init__(self, verbose: bool = False):
        super().__init__()
        self.verbose = verbose

    def __enter__(self):
        rv = super().__enter__()
        if not self.verbose:
            sys.stdout = self.enter_context(open(os.devnull, 'w'))
        return rv

    def __exit__(self, *exc_details):
        sys.stdout = sys.__stdout__  # Restore the original
        return super().__exit__(*exc_details)
Keep in mind that contextlib already provides a context manager for temporarily replacing standard output with a different file, appropriately named redirect_stdout.
with redirect_stdout(open(os.devnull, 'w')):
    ...
Using this as the basis for PrintStop makes use of composition, rather than inheritance.
import os
from contextlib import redirect_stdout, nullcontext

class PrintStop:
    def __init__(self, verbose: bool = False):
        # verbose output passes through; non-verbose output is discarded
        if verbose:
            self.cm = nullcontext()
        else:
            self.cm = redirect_stdout(open(os.devnull, 'w'))

    def __enter__(self):
        return self.cm.__enter__()

    def __exit__(self, *exc_details):
        return self.cm.__exit__(*exc_details)
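A quick usage check of the composed version, assuming the class above:

with PrintStop(verbose=False):
    print('suppressed')        # goes to os.devnull
print('printed normally')      # stdout is restored after the block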

Is there a way to test/check if tkinter mainloop is running?

I wonder if there is a way to test/check somehow whether the tkinter mainloop is running? Issuing multiple tk.Tk().mainloop() calls will break the GUI, but I haven't found a way to check whether the mainloop has been started for tk._default_root.
Annoyingly, while Tk().willdispatch() exists (though it appears undocumented) to lie and say the mainloop is running when it isn't (I've used it to allow an asyncio task to interlace asyncio events with tkinter events), there is no Python-level API to query the underlying flag (the C-level struct member dispatching) it sets.
The only place the flag is tested is in functions that automatically marshal calls from non-main threads to the mainloop thread, and it's not practical to use this as a mechanism of detection (it requires spawning threads solely to perform the test, involves a timeout, and throws an exception when it fails, making it ugly even if you could make it work).
In short, it's going to be on you. A solution you might use would be to make a subclass that intercepts the call to mainloop and records that it has been called:
from tkinter import Tk

class CheckableTk(Tk):
    def __init__(self, *args, **kwargs):
        self.running = False
        super().__init__(*args, **kwargs)

    def mainloop(self, *args, **kwargs):
        self.running = True
        try:
            return super().mainloop(*args, **kwargs)
        finally:
            self.running = False

    def willdispatch(self, *args, **kwargs):
        self.running = True  # Lie just like willdispatch lies
        return super().willdispatch(*args, **kwargs)
It's not a great solution, and I'd discourage it in general. The real answer is, have one, and only one, single place that runs the mainloop, instead of splitting up the possible launch points all over your program.
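A quick, hypothetical check of the subclass (assuming CheckableTk above):

root = CheckableTk()
print(root.running)            # False: mainloop not started yet

def check():
    print(root.running)        # True: printed from inside the loop
    root.destroy()

root.after(100, check)
root.mainloop()
print(root.running)            # False again after mainloop returns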
Posting a threading-based solution adapted from the matplotlib source code (lib/matplotlib/backends/_backend_tk.py):
import threading
import tkinter as tk

def is_tk_mainloop_running():
    dispatching = sentinel = object()

    def target():
        nonlocal dispatching
        try:
            # A no-op Tcl call; from a non-main thread it only succeeds
            # when the mainloop is dispatching events
            tk._default_root.call('while', '0', '{}')
        except RuntimeError as e:
            if str(e) == "main thread is not in main loop":
                print('not dispatching')
                dispatching = False
            else:
                raise
        except BaseException as e:
            print(e)
            raise
        else:
            dispatching = True
            print('dispatching')

    t = threading.Thread(target=target, daemon=True)
    t.start()
    tk._default_root.update()
    t.join()
    if dispatching is sentinel:
        raise RuntimeError('thread failed to determine dispatching value')
    return dispatching
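A possible way to exercise the helper (hypothetical usage, not part of the matplotlib source):

root = tk.Tk()
print(is_tk_mainloop_running())      # False: mainloop has not started yet

def check():
    print(is_tk_mainloop_running())  # True: we are inside the mainloop now
    root.destroy()

root.after(100, check)
root.mainloop()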
If you know the process name, on Windows you can use psutil to check whether a process is running.
Import psutil:
import psutil
build a function to check if a process is running:

def is_running(name: str) -> bool:
    name = name if name.endswith('.exe') else name + '.exe'
    return name in [process.name() for process in psutil.process_iter()]
or a function to kill a process:
def kill(name: str) -> bool:
    name = name if name.endswith('.exe') else name + '.exe'
    for process in psutil.process_iter():
        if process.name() == name:
            process.kill()
            return True
    return False
and then you can simply use them like this:

if __name__ == "__main__":
    app_name = 'chrome'
    if is_running(name=app_name):
        if kill(name=app_name):
            print(f'{app_name} killed!')
    else:
        print(f'{app_name} is not running!')

AssertionError not caught in unittest in Python try-except clause

I have an object created in a test case, and I want to make assertions inside one of its methods.
But the exception is swallowed by the try-except clause.
I know I can change run to re-raise the exception, but that is not what I want. Is there any unittest tool that can handle this?
It seems that the assertTrue method of unittest.TestCase is just a trivial assert clause.
class TestDemo(unittest.TestCase):
    def test_a(self):
        test_case = self

        class NestedProc:
            def method1(self):
                print("flag to show the method is running")
                test_case.assertTrue(False)

            def run(self):
                try:
                    self.method1()
                except:
                    pass  # could raise here to propagate the exception, but that's not what I want

        NestedProc().run()  # no exception raised
        # NestedProc().method1()  # exception raised
EDIT
For clarity, I paste my real-world test case here. The most tricky thing is that ParentProcess will always succeed, leading to the AssertionError not being correctly propagated to the test function.
class TestProcess(unittest.TestCase):

    @pytest.mark.asyncio
    async def test_process_stack_multiple(self):
        """
        Run multiple and nested processes to make sure the process stack is always correct
        """
        expect_true = []

        def test_nested(process):
            expect_true.append(process == Process.current())

        class StackTest(plumpy.Process):
            def run(self):
                # TODO: unexpected behaviour here
                # if an assertion error happens here it is not raised;
                # it is handled by the try-except clause in Process.
                # Is there a better way to handle this?
                expect_true.append(self == Process.current())
                test_nested(self)

        class ParentProcess(plumpy.Process):
            def run(self):
                expect_true.append(self == Process.current())
                proc = StackTest()
                # launch the inner process
                asyncio.ensure_future(proc.step_until_terminated())

        to_run = []
        for _ in range(100):
            proc = ParentProcess()
            to_run.append(proc)

        await asyncio.gather(*[p.step_until_terminated() for p in to_run])

        for proc in to_run:
            self.assertEqual(plumpy.ProcessState.FINISHED, proc.state)
        for res in expect_true:
            self.assertTrue(res)
Any assert* method, and even fail(), just raises an exception. The easiest approach is probably to manually set a flag and call fail() afterwards:
def test_a(self):
    success = True

    class NestedProc:
        def method1(self):
            nonlocal success
            success = False
            raise Exception()
        ...  # run() as before

    NestedProc().run()
    if not success:
        self.fail()
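Putting the pieces together, a complete runnable sketch of this flag-based pattern (the exception message is hypothetical):

import unittest

class TestDemo(unittest.TestCase):
    def test_a(self):
        success = True

        class NestedProc:
            def method1(self):
                nonlocal success
                success = False  # record the failure before raising
                raise Exception("assertion failed inside NestedProc")

            def run(self):
                try:
                    self.method1()
                except Exception:
                    pass  # swallowed, but the flag already recorded the failure

        NestedProc().run()
        if not success:
            self.fail("method1 reported a failure")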

Using context managers for recovering from celery's SoftTimeLimitExceeded

I am trying to set a maximum run time for my celery jobs.
I am currently recovering from exceptions with a context manager. I ended up with code very similar to this snippet:
from celery.exceptions import SoftTimeLimitExceeded

class Manager:
    def __enter__(self):
        return self

    def __exit__(self, error_type, error, tb):
        if error_type == SoftTimeLimitExceeded:
            logger.info('job killed.')
            # swallow the exception
            return True

@task
def do_foo():
    with Manager():
        run_task1()
        run_task2()
        run_task3()
What I expected:
If do_foo times out in run_task1, the logger logs, the SoftTimeLimitExceeded exception is swallowed, the body of the manager is skipped, the job ends without running run_task2 and run_task3.
What I observe:
do_foo times out in run_task1, SoftTimeLimitExceeded is raised, the logger logs, and the exception is swallowed, but run_task2 and run_task3 run nevertheless.
I am looking for an answer to the following two questions:
Why is run_task2 still executed when SoftTimeLimitExceeded is raised in run_task1 in this setting?
Is there an easy way to transform my code so that it performs as expected?
Cleaning up the code
This code is pretty good; there's not much cleaning up to do.
You shouldn't return self from __enter__ if the context manager isn't designed to be used with the as keyword.
is should be used when comparing classes, since they are singletons,
but you should prefer issubclass to properly emulate exception handling.
Implementing these changes gives:
from celery.exceptions import SoftTimeLimitExceeded

class Manager:
    def __enter__(self):
        pass

    def __exit__(self, error_type, error, tb):
        # error_type is None when the block exits cleanly
        if error_type is not None and issubclass(error_type, SoftTimeLimitExceeded):
            logger.info('job killed.')
            # swallow the exception
            return True

@task
def do_foo():
    with Manager():
        run_task1()
        run_task2()
        run_task3()
Debugging
I created a mock environment for debugging:
class SoftTimeLimitExceeded(Exception):
    pass

class Logger:
    info = print

logger = Logger()
del Logger

def task(f):
    return f

def run_task1():
    print("running task 1")
    raise SoftTimeLimitExceeded

def run_task2():
    print("running task 2")

def run_task3():
    print("running task 3")
Executing this and then your program gives:
>>> do_foo()
running task 1
job killed.
This is the expected behaviour.
Hypotheses
I can think of two possibilities:
Something in the chain, probably run_task1, is asynchronous.
celery is doing something weird.
I'll run with the second hypothesis because I can't test the former.
I've been bitten by the obscure behaviour of a combination between context managers, exceptions and coroutines before, so I know what sorts of problems it causes. This seems like one of them, but I'll have to look at celery's code before I can go any further.
Edit: I can't make head nor tail of celery's code, and searching hasn't turned up the code that raises SoftTimeLimitExceeded to allow me to trace it backwards. I'll pass it on to somebody more experienced with celery to see if they can work out how it works.

Exceptions for the whole class

I'm writing a program in Python, and nearly every method in my class is written like this:
def someMethod(self):
    try:
        ...
    except someException:
        # in case of exception, do something here,
        # e.g. display a dialog box to inform the user
        # that he has done something wrong
        ...
As the class grows, it is a little bit annoying to write the same try-except block over and over. Is it possible to create some sort of 'global' exception handler for the whole class? What's the recommended way in Python to deal with this?
Write one or more exception handler functions that, given a function and the exception raised in it, do what you want to do (e.g. display an alert). If you need more than one, write them.
def message(func, e):
    print("Exception", type(e).__name__, "in", func.__name__)
    print(str(e))
Now write a decorator that applies a given handler to a called function:
import functools

def handle_with(handler, *exceptions):
    try:
        handler, cleanup = handler
    except TypeError:
        cleanup = lambda f, e: None

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            e = None
            try:
                return func(*args, **kwargs)
            except exceptions or Exception as exc:
                e = exc  # keep a reference; the name bound by 'as' is cleared after the except block
                return handler(func, e)
            finally:
                cleanup(func, e)
        return wrapper
    return decorator
This only captures the exceptions you specify. If you don't specify any, Exception is caught. Additionally, the first argument can be a tuple (or other sequence) of two handler functions; the second handler, if given, is called in a finally clause. The value returned from the primary handler is returned as the value of the function call.
Now, given the above, you can write:
@handle_with(message, TypeError, ValueError)
def add(x, y):
    return x + y
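For instance, calling the decorated function (assuming the message handler above):

print(add(1, 2))    # 3
print(add(1, '2'))  # message() prints the TypeError; add returns None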
You could also do this with a context manager:
from contextlib import contextmanager

@contextmanager
def handler(handler, *exceptions):
    try:
        handler, cleanup = handler
    except TypeError:
        cleanup = lambda e: None
    e = None
    try:
        yield
    except exceptions or Exception as exc:
        e = exc  # keep a reference for the finally clause
        handler(e)
    finally:
        cleanup(e)
Now you can write:
def message(e):
    print("Exception", type(e).__name__)
    print(str(e))

def add(x, y):
    with handler(message, TypeError, ValueError):
        return x + y
Note that the context manager doesn't know what function it's in (you can find this out, sorta, using inspect, though this is "magic" so I didn't do it) so it gives you a little less useful information. Also, the context manager doesn't give you the opportunity to return anything in your handler.
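If you do need a value out of the handled block, one hypothetical workaround is a mutable result holder that both the block and the handler can write to (assuming the handler context manager above):

class Result:
    value = None  # hypothetical holder; the handler writes a fallback here

def add(x, y):
    res = Result()
    with handler(lambda e: setattr(res, 'value', 0), TypeError, ValueError):
        res.value = x + y
    return res.value  # 0 if the addition failed, x + y otherwise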
I can think of two options:
Write a decorator that can wrap each method in the try block.
Write a "dispatcher" method that calls the appropriate method inside a try block, then call that method instead of the individual ones. That is, instead of calling obj.someMethod(), obj.otherMethod, you call obj.dispatch('someMethod') or obj.dispatch('otherMethod'), where dispatch is a wrapper that contains the try block.
Your approach seems like a bit of a strange design, though. It might make more sense to have the dialog-box stuff in some other part of the code, some higher-level event loop that catches errors and displays messages about them.
