How to repeat the body of a with-statement in Python?

I want to implement a way to repeat a section of code as many times as it's needed using a context manager only, because of its pretty syntax. Like this:
with try_until_success(attempts=10):
    command1()
    command2()
    command3()
The commands must be executed once if no errors happen, and they should be executed again if an error occurs, until 10 attempts have passed, at which point the error must be raised. For example, this can be useful for reconnecting to a database. The syntax shown is literal; I do not want to modify it (so do not suggest replacing it with some kind of for or while statement).
Is there a way to implement try_until_success in Python to do what I want?
What I tried is:
from contextlib import contextmanager

@contextmanager
def try_until_success(attempts=None):
    counter = 0
    while True:
        try:
            yield
        except Exception as exc:
            pass
        else:
            break
        counter += 1
        if attempts is not None and counter >= attempts:
            raise exc
And this gives me the error:
RuntimeError: generator didn't stop after throw()
I know there are many ways to achieve what I need using a loop instead of a with-statement, or with the help of a decorator. But both have syntax disadvantages: in the case of a loop I have to insert a try-except block, and in the case of a decorator I have to define a new function.
I have already looked at the questions:
How do I make a contextmanager with a loop inside?
Conditionally skipping the body of Python With statement
They did not help with my question.

The problem is that the body of the with statement does not run within the call to try_until_success. That function returns an object with an __enter__ method; that __enter__ method is called and returns, and then the body of the with statement is executed. There is no provision for wrapping the body in any kind of loop that would allow it to be repeated once the end of the with statement is reached.

This goes against how context managers were designed to work; you'd likely have to resort to non-standard tricks like patching the bytecode to do this.
See the official docs on the with statement and the original PEP 343 for how with statements are expanded. It might help you understand why this isn't going to be officially supported, and why other commenters are generally saying this is a bad thing to attempt.
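For intuition, here is a simplified sketch of the expansion PEP 343 describes (the real expansion also binds the as target; this sketch omits those details):
import sys

mgr = cm                    # the expression after 'with'
mgr.__enter__()
try:
    BODY                    # the indented block, executed exactly once
except:
    # __exit__ may suppress the exception by returning a true value
    if not mgr.__exit__(*sys.exc_info()):
        raise
else:
    mgr.__exit__(None, None, None)
Note there is no loop anywhere in this expansion, which is why the body cannot be re-run by the context manager alone.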
As an example of something that might work, maybe try:
class try_until_success:
    def __init__(self, attempts):
        self.attempts = attempts
        self.attempt = 0
        self.done = False
        self.failures = []
    def __iter__(self):
        while not self.done and self.attempt < self.attempts:
            i = self.attempt
            yield self
            assert i != self.attempt, "attempt not attempted"
        if self.done:
            return
        if self.failures:
            raise Exception("failures occurred", self.failures)
    def __enter__(self):
        self.attempt += 1
    def __exit__(self, _ext, exc, _tb):
        if exc:
            self.failures.append(exc)
            return True
        self.done = True
for attempt in try_until_success(attempts=10):
    with attempt:
        command1()
        command2()
        command3()
You'd probably want to separate the context manager from the iterator (to help prevent incorrect usage), but it does roughly what you were after; a sketch of that separation follows.
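One possible separation (a sketch of the same idea; the names retry_attempts and _Attempt are invented here, not part of the original answer):
class _Attempt:
    # context manager for a single attempt; records the failure instead of raising
    def __init__(self, state):
        self._state = state
    def __enter__(self):
        return self
    def __exit__(self, _ext, exc, _tb):
        if exc is not None:
            self._state['failures'].append(exc)
            return True  # suppress; the driving generator decides what happens next
        self._state['done'] = True

def retry_attempts(attempts):
    # generator that drives the retry loop, yielding a fresh _Attempt each time
    state = {'done': False, 'failures': []}
    for _ in range(attempts):
        yield _Attempt(state)
        if state['done']:
            return
    raise Exception("all attempts failed", state['failures'])

for attempt in retry_attempts(10):
    with attempt:
        command1()
        command2()
        command3()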

Is there a way to implement try_until_success in Python to do what I want?

Yes. You don't need to make it a context manager. Just make it a function accepting a function:
def try_until_success(command, attempts=1):
    for _ in range(attempts):
        try:
            return command()
        except Exception as exc:
            err = exc
    raise err
And then the syntax is still pretty clear, no for or while statements - not even with:
attempts = 10
try_until_success(command1, attempts)
try_until_success(command2, attempts)
try_until_success(command3, attempts)
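If a command takes arguments, a zero-argument lambda keeps the same call shape (a minimal sketch; connect, host and port are placeholder names, not from the question):
try_until_success(lambda: connect(host, port), attempts)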

Related

Try / Except all Python errors of a certain type

In my Python code, I'm using the PyQt5 module to display a GUI. Sometimes I encounter a RuntimeError if I delete an element and then attempt to use a function on the element's instance. This error is only displayed in the console, and it does not actually interfere with the GUI or terminate it.
Regardless, I would like to remove these RuntimeErrors. My first thought was to use a try/except block on the code and catch the RuntimeError I was talking about. The problem with this is that if I encase my whole code in the try/except block, then if the error is caught, execution will skip to the end of my program and terminate:
try:
    # If any errors occur here...
    <code>
except RuntimeError:
    # The GUI will stop entirely, and the Python interpreter will skip to this line
    pass
Another solution to my problem is to encase any statement which could throw a RuntimeError in a try/except block, like so:
try:
    # If any errors occur here...
    <code that may print an error>
except RuntimeError:
    # Python won't display the error
    pass
However, given the sheer number of times I would need to do this in my code, I was wondering if there is a more efficient way of fixing this problem.
As per my comment, I would definitely go with catching the error at the specific calls that might throw the RuntimeError. This avoids accidentally suppressing another error you were not anticipating.
Depending on what the calls are that might give the runtime error, I would prefer a decorator pattern to hide the try-except logic. Something like:
from functools import wraps

def catch_runtime_error(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except RuntimeError:
            pass  # or whatever handling you fancy
    return wrapper
which you would then use like:
@catch_runtime_error
def the_function_that_raises(...):
    # whatever the body is

the_function_that_raises(...)
Alternatively you can use it more directly in your code:
def the_function_that_raises(...):
    # whatever the body is

catch_runtime_error(the_function_that_raises)(...)
You can use custom decorators to do this (I don't know your Python level, but it's not the most beginner-friendly thing). For example, this code does not raise any error:
from functools import wraps

def IgnoreError(f):
    @wraps(f)
    def wrapper():
        try:
            f()
        except ZeroDivisionError:
            pass
    return wrapper

@IgnoreError
def func1():
    x = 5 / 0

func1()
In your case, you will have to define this function:
def IgnoreError(f):
    def wrapper(*args, **kwargs):
        try:
            f(*args, **kwargs)
        except RuntimeError:
            pass
    return wrapper
and then anytime you create a function that may raise the RuntimeError, just put the decorator @IgnoreError before your definition, like this:
@IgnoreError
def func():
    <your code here>
(if you want, here's a video from TechWithTim explaining decorators)

How could I pass a block to a function in Python, like the way you pass a block in Ruby?

In Ruby, I can pass a block of code to a method.
For example, I can pass different code blocks to get_schedules_with_retries method.
And invoke the block by calling block.call.
I'd like to know how I could implement that logic in Python, because I have lots of code blocks that need a retry pattern, and I don't want to copy-paste the retry logic into each of them.
Example:
def get_schedules_with_retries(&block)
  max_retry_count = 3
  retry_count = 0
  while (retry_count < max_retry_count)
    begin
      schedules = get_more_raw_schedules
      block.call(schedules)
    rescue Exception => e
      print_error(e)
    end
    if schedules.count > 0
      break
    else
      retry_count += 1
    end
  end
  return schedules
end
get_schedules_with_retries do |schedules|
  # do something here
end

get_schedules_with_retries do |schedules|
  # do another thing here
end
In Python, a block is a syntactic feature (an indentation under block opening statements like if or def) and not an object. The feature you expect may be a closure (which can access variables outside of the block), which you can achieve using inner functions, but any callable could be used. Because of how lambda works in Python, the inline function definition you've shown with do |arg| is limited to a single expression.
Here's a rough rewrite of your sample code in Python.
import traceback

def get_schedules_with_retries(callable, max_retry_count=3):
    retry_count = 0
    while retry_count < max_retry_count:
        schedules = get_more_raw_schedules()
        try:
            callable(schedules)
        except:  # Note: could filter types, bind name, etc.
            traceback.print_exc()
        if len(schedules) > 0:  # len() stands in for Ruby's .count
            break
        else:
            retry_count += 1
    return schedules
get_schedules_with_retries(lambda schedules: single_expression)

def more_complex_function(schedules):
    pass  # do another thing here

get_schedules_with_retries(more_complex_function)
One variant uses a for loop to make it clear the loop is finite:
def call_with_retries(callable, args=(), tries=3):
    for attempt in range(tries):
        try:
            result = callable(*args)
            break
        except:
            traceback.print_exc()
            continue
    else:  # break never reached, so the function always failed
        raise  # Re-raises the exception we printed above
    return result
Frequently when passing callables like this, you'll already have the function you want available somewhere and won't need to redefine it. For instance, methods on objects (bound methods) are perfectly valid callables.
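For example, combining that with call_with_retries above (a sketch; the Fetcher class and its fetch method are hypothetical names):
class Fetcher:
    def __init__(self, url):
        self.url = url
    def fetch(self):
        ...  # may raise on transient failures

f = Fetcher("http://example.com")
call_with_retries(f.fetch)  # a bound method is a perfectly valid callable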
You could do it like this:
def codeBlock(parameter1, parameter2):
    print("I'm a code block")

def passMeABlock(block, *args):
    block(*args)

# pass the block like this
passMeABlock(codeBlock, 1, 2)
You do so by defining a function, either by using the def statement or a lambda expression.
There are other techniques, however, that may apply here. If you need to apply common logic to the input or output of a function, write a decorator. If you need to handle exceptions in a block of code, creating a context manager may be applicable.
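For the exception-handling case, such a context manager can be very small (a sketch, not from the original answer; Python 3.4+ also ships contextlib.suppress for exactly this):
from contextlib import contextmanager

@contextmanager
def ignoring(*exceptions):
    # run the with-body, swallowing the listed exception types
    try:
        yield
    except exceptions:
        pass

with ignoring(ZeroDivisionError):
    x = 1 / 0  # suppressed; execution resumes after the with block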

How to prevent try catching every possible line in python?

I have many lines in a row that may each throw an exception, but no matter what, execution should still continue with the next line. How can I do this without individually wrapping every single statement that may throw an exception in its own try/except?
try:
    this_may_cause_an_exception()
    but_I_still_wanna_run_this()
    and_this()
    and_also_this()
except Exception, e:
    logging.exception('An error maybe occured in one of first occuring functions causing the others not to be executed. Locals: {locals}'.format(locals=locals()))
In the above code, all functions may throw exceptions, but the next functions should still execute whether or not an exception was thrown. Is there a nice way of doing that?
I don't want to do this:
try:
    this_may_cause_an_exception()
except:
    pass
try:
    but_I_still_wanna_run_this()
except:
    pass
try:
    and_this()
except:
    pass
try:
    and_also_this()
except:
    pass
I think code should continue to run after an exception only if the exception is critical (if the computer will burn or the whole system will get messed up, the whole program should stop), but exceptions are also thrown for many small things, such as a failed connection.
I normally don't have any problems with exception handling, but in this case I'm using a third-party library which easily throws exceptions for small things.
After looking at m4spy's answer, I thought: wouldn't it be possible to have a decorator which lets every line in the function execute even if one of them raises an exception?
Something like this would be cool:
def silent_log_exceptions(func):
    @wraps(func)
    def _wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception:
            logging.exception('...')
            some_special_python_keyword  # which causes it to continue executing the next line
    return _wrapper
Or something like this:
def silent_log_exceptions(func):
    @wraps(func)
    def _wrapper(*args, **kwargs):
        for line in func(*args, **kwargs):
            try:
                exec line
            except Exception:
                logging.exception('...')
    return _wrapper

@silent_log_exceptions
def save_tweets():
    a = requests.get('http://twitter.com')
    x = parse(a)
    bla = x * x
for func in [this_may_cause_an_exception,
             but_I_still_wanna_run_this,
             and_this,
             and_also_this]:
    try:
        func()
    except:
        pass
There are two things to notice here:
All actions you want to perform have to be represented by callables with the same signature (in the example, callables that take no arguments). If they aren't already, wrap them in small functions, lambda expressions, callable classes, etc.
Bare except clauses are a bad idea, but you probably already knew that.
An alternative approach that is more flexible is to use a higher-order function like
def logging_exceptions(f, *args, **kwargs):
    try:
        f(*args, **kwargs)
    except Exception as e:
        print("Houston, we have a problem: {0}".format(e))
I ran into something similar, and asked a question on SO here. The accepted answer handles logging, and watching for only a specific exception. I ended up with a modified version:
class Suppressor:
    def __init__(self, exception_type, l=None):
        self._exception_type = exception_type
        self.logger = logging.getLogger('Suppressor')
        if l:
            self.l = l
        else:
            self.l = {}
    def __call__(self, expression):
        try:
            exec expression in self.l
        except self._exception_type as e:
            self.logger.debug('Suppressor: suppressed exception %s with content \'%s\'' % (type(self._exception_type), e))
Usable like so:
s = Suppressor(yourError, locals())
s(cmdString)
So you could set up a list of commands and use map with the suppressor to run across all of them.
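For instance (a sketch in the same Python 2 style as the class above; the command strings are placeholders):
s = Suppressor(RuntimeError, locals())
map(s, ["this_may_cause_an_exception()",
        "but_I_still_wanna_run_this()"])  # map is eager in Python 2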
You can handle such a task with a decorator:
import logging
from functools import wraps

def log_ex(func):
    @wraps(func)
    def _wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception:
            logging.exception('...')
    return _wrapper
@log_ex
def this_may_cause_an_exception():
    print 'this_may_cause_an_exception'
    raise RuntimeError()

@log_ex
def but_i_wanna_run_this():
    print 'but_i_wanna_run_this'

def test():
    this_may_cause_an_exception()
    but_i_wanna_run_this()
Calling the test function will look like (which will show that both functions were executed):
>>> test()
this_may_cause_an_exception
ERROR:root:...
Traceback (most recent call last):
File "<stdin>", line 5, in _wrapper
File "<stdin>", line 4, in my_func
RuntimeError
but_i_wanna_run_this
Sometimes, when a language fails to support an elegant way of expressing an idea because language development has literally stalled for decades, you can only rely on the fact that Python is still a dynamic language which supports the exec statement, which makes the following possible:
code="""
for i in range(Square_Size):
Square[i,i] #= 1
Square[i+1,i] #= 2
#dowhatever()
"""
This new operator makes the code more pythonic and elegant, since you don't need to add if-statements guaranteeing that the index stays in bounds or that the function succeeds, which is irrelevant to what we want to express here (it just shouldn't stop). (Note: while safe indexing would be possible by creating a class based on the list class, this operator works wherever there should be a try/except.) In Lisp it would be easy to define such an operator in a Lispy way, but it seems impossible to define it elegantly in Python. Still, here is the little preparser which makes it possible:
exec "\n".join([o+"try: "+z.replace("#","")+"\n"+o+"except: pass" if "#" in z else z for z in code.split("\n") for o in ["".join([h for h in z if h==" "])]]) #new <- hackish operator which wraps try catch into line
The result, assuming that Square was 4x4 and contained only zeros:
[1 0 0 0]
[2 1 0 0]
[0 2 1 0]
[0 0 2 1]
Relevant: the Sage / Sagemath CAS uses a preparse function which transforms code before it reaches the Python interpreter.
A monkey-patch for that function would be:
def new_preparse(code, *args, **kwargs):
    code = "\n".join([o+"try: "+z.replace("#","")+"\n"+o+"except: pass" if "#" in z else z for z in code.split("\n") for o in ["".join([h for h in z if h==" "])]])
    return preparse(code)

sage.misc.preparser.preparse = new_preparse
Apart from the answers provided, I think it's worth noting that one-line try-except statements have been proposed - see the related PEP 463 with its unfortunate rejection notice:
""" I want to reject this PEP. I think the proposed syntax is acceptable given the
desired semantics, although it's still a bit jarring. It's probably no worse than the
colon used with lambda (which echoes the colon used in a def just like the colon here
echoes the one in a try/except) and definitely better than the alternatives listed.
But the thing I can't get behind are the motivation and rationale. I don't think that
e.g. dict.get() would be unnecessary once we have except expressions, and I disagree
with the position that EAFP is better than LBYL, or "generally recommended" by Python.
(Where do you get that? From the same sources that are so obsessed with DRY they'd rather
introduce a higher-order-function than repeat one line of code? :-)
This is probably the most you can get out of me as far as a pronouncement. Given that
the language summit is coming up I'd be happy to dive deeper in my reasons for rejecting
it there (if there's demand).
I do think that (apart from never explaining those dreadful acronyms :-) this was a
well-written and well-researched PEP, and I think you've done a great job moderating the
discussion, collecting objections, reviewing alternatives, and everything else that is
required to turn a heated debate into a PEP. Well done Chris (and everyone who
helped), and good luck with your next PEP! """
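For the simple suppress-and-continue case, the closest thing the language did gain is contextlib.suppress, added in Python 3.4 (so newer than most of the code in this thread):
from contextlib import suppress

with suppress(RuntimeError):
    this_may_cause_an_exception()
but_I_still_wanna_run_this()  # runs regardless, outside the with block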
try:
    this_may_cause_an_exception()
except:
    logging.exception('An error occurred')
finally:
    but_I_still_wanna_run_this()
    and_this()
    and_also_this()
You can use the finally block of exception handling. It is actually meant for cleanup code though.
EDIT:
I see you said all of the functions can throw exceptions, in which case larsmans' answer is about the cleanest I can think of to catch exception for each function call.

Can I use python with statement for conditional execution?

I'm trying to write code that supports the following semantics:
with scope('action_name') as s:
    do_something()
    ...
do_some_other_stuff()
The scope, among other things (setup, cleanup), should decide if this section should run.
For instance, if the user configured the program to bypass 'action_name', then after scope() is evaluated, do_some_other_stuff() will be executed without calling do_something() first.
I tried to do it using this context manager:
@contextmanager
def scope(action):
    if action != 'bypass':
        yield
but got RuntimeError: generator didn't yield exception (when action is 'bypass').
I am looking for a way to support this without falling back to the more verbose optional implementation:
with scope('action_name') as s:
    if s.should_run():
        do_something()
    ...
do_some_other_stuff()
Does anyone know how I can achieve this?
Thanks!
P.S. I am using python2.7
EDIT:
The solution doesn't necessarily have to rely on with statements. I just didn't know exactly how to express it without it. In essence, I want something in the form of a context (supporting setup and automatic cleanup, unrelated to the contained logic) and allowing for conditional execution based on parameters passed to the setup method and selected in the configuration.
I also thought about a possible solution using decorators. Example:
@scope('action_name')  # if 'action_name' is in allowed actions, do:
                       #     setup()
                       #     do_action_name()
                       #     cleanup()
                       # otherwise return
def do_action_name():
    do_something()
but I don't want to enforce too much of the internal structure (i.e., how the code is divided into functions) based on these scopes.
Does anybody have some creative ideas?
You're trying to modify the expected behaviour of a basic language construct. That's never a good idea; it will just lead to confusion.
There's nothing wrong with your work-around, but you can simplify it just a bit.
@contextmanager
def scope(action):
    yield action != 'bypass'

with scope('action_name') as s:
    if s:
        do_something()
    ...
do_some_other_stuff()
Your scope could instead be a class whose __enter__ method returns either a useful object or None and it would be used in the same fashion.
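That class-based version might look like this (a sketch; the setup and cleanup hooks are left as stubs):
class scope(object):
    def __init__(self, action):
        self.action = action
    def __enter__(self):
        # setup() would go here
        return self if self.action != 'bypass' else None
    def __exit__(self, *exc_info):
        # cleanup() would go here
        return False  # do not suppress exceptions

with scope('action_name') as s:
    if s:
        do_something()
do_some_other_stuff()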
The following seems to work:
from contextlib import contextmanager

@contextmanager
def skippable():
    try:
        yield
    except RuntimeError as e:
        if e.message != "generator didn't yield":
            raise

@contextmanager
def context_if_condition():
    if False:
        yield True

with skippable(), context_if_condition() as ctx:
    print "won't run"
Considerations:
needs someone to come up with better names
context_if_condition can't be used without skippable but there's no way to enforce that/remove the redundancy
it could catch and suppress the RuntimeError from a deeper function than intended (a custom exception could help there, but that makes the whole construct messier still)
it's not any clearer than just using @Mark Ransom's version
I don't think this can be done. I tried implementing a context manager as a class and there's just no way to force the block to raise an exception which would subsequently be squelched by the __exit__() method.
I have the same use case as you, and came across the conditional library that someone has helpfully developed in the time since you posted your question.
From the site, its use is as:
with conditional(CONDITION, CONTEXTMANAGER()):
    BODY()

Nice exception handling when re-trying code

I have some test cases. The test cases rely on data which takes time to compute. To speed up testing, I've cached the data so that it doesn't have to be recomputed.
I now have foo(), which looks at the cached data. I can't tell ahead of time what it will look at, as that depends a lot on the test case.
If a test case fails because it doesn't find the right cached data, I don't want it to fail - I want it to compute the data and then try again. I also don't know which exception in particular it will throw because of the missing data.
My code right now looks like this:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True
try:
    foo()
except:
    if not dataComputed:
        calculateData()
        dataComputed = True
        try:
            foo()
        except:
            # error handling code
    else:
        # the same error handling code
What's the best way to re-structure this code?
I disagree with the key suggestion in the existing answers, which basically boils down to treating exceptions in Python as you would in, say, C++ or Java. That's NOT the preferred style in Python, where the good old idea often applies that "it's better to ask forgiveness than permission" (attempt an operation and deal with the exception, if any, rather than obscuring your code's main flow and incurring overhead with thorough preliminary checks). I do agree with Gabriel that a bare except is hardly ever a good idea (unless all it does is some form of logging followed by a raise to let the exception propagate). So, say you have a tuple with all the exception types that you do expect and want to handle the same way, say:
expected_exceptions = KeyError, AttributeError, TypeError
and always use except expected_exceptions: rather than bare except:.
So, with that out of the way, one slightly less-repetitious approach to your needs is:
try:
    foo1()
except expected_exceptions:
    try:
        if condition:
            foobetter()
        else:
            raise
    except expected_exceptions:
        handleError()
A different approach is to use an auxiliary function to wrap the try/except logic:
def may_raise(expected_exceptions, somefunction, *a, **k):
    try:
        return False, somefunction(*a, **k)
    except expected_exceptions:
        return True, None
Such a helper may often come in useful in several different situations, so it's pretty common to have something like this somewhere in a project's "utilities" modules. Now, for your case (no arguments, no results) you could use:
failed, _ = may_raise(expected_exceptions, foo1)
if failed and condition:
    failed, _ = may_raise(expected_exceptions, foobetter)
if failed:
    handleError()
which I would argue is more linear and therefore simpler. The only issue with this general approach is that an auxiliary function such as may_raise does not FORCE you to deal in some way or other with exceptions, so you might just forget to do so (just like the use of return codes, instead of exceptions, to indicate errors, is prone to those return values mistakenly being ignored); so, use it sparingly...!-)
Using blanket exceptions isn't usually a great idea. What kind of Exception are you expecting there? Is it a KeyError, AttributeError, TypeError...
Once you've identified what type of error you're looking for you can use something like hasattr() or the in operator or many other things that will test for your condition before you have to deal with exceptions.
That way you can clean up your logic flow and save your exception handling for things that are really broken!
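For example, testing the condition up front instead of catching the exception afterwards (a minimal sketch; cache, key and compute are placeholder names):
if key in cache:            # look before you leap
    value = cache[key]
else:
    value = compute(key)    # no KeyError handling needed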
Sometimes there's no nice way to express a flow; it's just complicated. But here's a way to call foo() in only one place, and have the error handling in only one place:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True
while True:
    try:
        foo()
        break
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            # the error handling code
            break
You may not like the loop, YMMV...
Or:
if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True
done = False
while not done:
    try:
        foo()
        done = True
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            # the error handling code
            done = True
I like the alternative approach proposed by Alex Martelli.
What do you think about using a list of functions as the argument of may_raise? The functions would be executed until one succeeds!
Here is the code
def foo(x):
    raise Exception("Arrrgh!")
    return 0

def foobetter(x):
    print "Hello", x
    return 1

def try_many(functions, expected_exceptions, *a, **k):
    ret = None
    for f in functions:
        try:
            ret = f(*a, **k)
        except expected_exceptions, e:
            print e
        else:
            break
    return ret

print try_many((foo, foobetter), Exception, "World")
result is
Arrrgh!
Hello World
1
Is there a way to tell whether you want to do foobetter() before making the call? If you get an exception, it should be because something unexpected (exceptional!) happened. Don't use exceptions for flow control.
