This question already has an answer here:
Why does my contextmanager-function not work like my contextmanager class in python?
(1 answer)
Closed 8 years ago.
Here are the relevant pieces of code:
import contextlib
import shutil
import tempfile

@contextlib.contextmanager
def make_temp_dir():
    temp_dir = tempfile.mkdtemp()
    yield temp_dir
    shutil.rmtree(temp_dir)
with make_temp_dir() as tmpdir:
    # Sometimes something in here throws an exception that gets caught higher up
    pass
OK, so writing this all out, I understand now what's happening. The __exit__ method of the context manager I'm creating with the decorator is running, but that doesn't, of course, return flow to my generator.
So how should I be doing this?
What happens here is the following:
On __enter__(), the generator is started. Whatever it yields is used as the return value of __enter__().
On __exit__(), the generator is resumed, either normally or by injecting an exception. The relevant code is in $PYTHONROOT/contextlib.py, where you can see that either next() or throw() is called on the generator.
If throw() is called on a generator, the exception is raised inside it exactly where we left off last time, i.e. the yield expression is what raises the exception.
Thus, you will have to enclose the yield in a try: statement; only then will you be able to do something with the exception.
If you fail to do so, your generator will simply propagate the exception back without doing anything.
You probably want:
@contextlib.contextmanager
def make_temp_dir():
    temp_dir = tempfile.mkdtemp()
    try:
        yield temp_dir
    finally:
        shutil.rmtree(temp_dir)
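To see the difference, here is a quick sketch (assuming the fixed make_temp_dir defined just above): the try/finally version removes the directory even when the body raises, whereas the original version would leak it.
import os

try:
    with make_temp_dir() as tmpdir:
        raise ValueError("simulated failure inside the block")
except ValueError:
    pass

# With the try/finally version the directory has been cleaned up;
# without it, the rmtree call after the yield would never run.
print(os.path.exists(tmpdir))  # prints False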
Related
I want to implement a way to repeat a section of code as many times as it's needed using a context manager only, because of its pretty syntax. Like this:
with try_until_success(attempts=10):
command1()
command2()
command3()
The commands must be executed once if no errors occur. If an error occurs, they should be executed again, until 10 attempts have passed; at that point the error must be raised. For example, this can be useful for reconnecting to a database. The syntax I presented is literal; I do not want to modify it (so do not suggest replacing it with some kind of for or while statement).
Is there a way to implement try_until_success in Python to do what I want?
What I tried is:
from contextlib import contextmanager
@contextmanager
def try_until_success(attempts=None):
    counter = 0
    while True:
        try:
            yield
        except Exception as exc:
            pass
        else:
            break
        counter += 1
        if attempts is not None and counter >= attempts:
            raise exc
And this gives me the error:
RuntimeError: generator didn't stop after throw()
I know there are many ways to achieve what I need using a loop instead of a with statement, or with the help of a decorator. But both have syntax disadvantages: in the case of a loop I have to insert a try-except block, and in the case of a decorator I have to define a new function.
I have already looked at the questions:
How do I make a contextmanager with a loop inside?
Conditionally skipping the body of Python With statement
They did not help with my question.
The problem is that the body of the with statement does not run within the call to try_until_success. That function returns an object with an __enter__ method; that __enter__ method is called and returns, and only then is the body of the with statement executed. There is no provision for wrapping the body in any kind of loop that would allow it to be repeated once the end of the with statement is reached.
This goes against how context managers were designed to work; you'd likely have to resort to non-standard tricks like patching the bytecode to do this.
See the official docs on the with statement and the original PEP 343 for how they are expanded. It might help you understand why this isn't going to be officially supported, and maybe why other commenters are generally saying this is a bad thing to try and do.
As an example of something that might work, maybe try:
class try_until_success:
    def __init__(self, attempts):
        self.attempts = attempts
        self.attempt = 0
        self.done = False
        self.failures = []

    def __iter__(self):
        while not self.done and self.attempt < self.attempts:
            i = self.attempt
            yield self
            assert i != self.attempt, "attempt not attempted"
        if self.done:
            return
        if self.failures:
            raise Exception("failures occurred", self.failures)

    def __enter__(self):
        self.attempt += 1

    def __exit__(self, _ext, exc, _tb):
        if exc:
            self.failures.append(exc)
            return True
        self.done = True


for attempt in try_until_success(attempts=10):
    with attempt:
        command1()
        command2()
        command3()
You'd probably want to separate out the context manager from the iterator (to help prevent incorrect usage), but it does something similar to what you were after.
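One way that separation might look (a rough sketch with made-up names like Attempt and attempts; command1/command2/command3 are the calls from the question): the per-attempt context manager only records the failure, while the surrounding iterator decides whether to retry or re-raise.
class Attempt:
    """Context manager for a single attempt; records whether it failed."""
    def __init__(self):
        self.error = None

    def __enter__(self):
        return self

    def __exit__(self, _type, exc, _tb):
        self.error = exc
        return True  # suppress the exception so the surrounding loop can retry


def attempts(limit):
    """Yield Attempt objects until one succeeds or the limit is reached."""
    last_error = None
    for _ in range(limit):
        attempt = Attempt()
        yield attempt
        if attempt.error is None:
            return
        last_error = attempt.error
    raise last_error


for attempt in attempts(10):
    with attempt:
        command1()
        command2()
        command3()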
Is there a way to implement try_until_success in Python to do what I want?
Yes. You don't need to make it a context manager. Just make it a function accepting a function:
def try_until_success(command, attempts=1):
    for _ in range(attempts):
        try:
            return command()
        except Exception as exc:
            err = exc
    raise err
And then the syntax is still pretty clear, no for or while statements - not even with:
attempts = 10
try_until_success(command1, attempts)
try_until_success(command2, attempts)
try_until_success(command3, attempts)
This question already has answers here:
Break or exit out of "with" statement?
(13 answers)
Closed 2 years ago.
I have a With statement that I'd like to skip if some <condition> is satisfied. That is, I write:
with MyContext() as mc:
    do_something()
and
class MyContext(object):
    ...
    def __enter__(self, ...):
        if <condition>:
            JumpToExit()

    def __exit__(self, ...):
        print('goodbye')
I would like do_something() to be executed only on certain conditions, otherwise I'd like JumpToExit() to skip the body entirely and just finish the block.
Thanks.
This is impossible. A with statement cannot cancel the block. However, you could do something like this:
def __enter__(self):
    return condition
Then, when you use the context:
with context as condition:
    if condition:
        ....
If you want to return something else as well, you can return a tuple like return condition, self; to use it, unpack the tuple in the with statement, e.g. with context as (condition, ctx):.
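For example (a small sketch; some_check() is a made-up stand-in for whatever condition you actually test):
class MyContext(object):
    def __enter__(self):
        condition = some_check()  # hypothetical condition test
        return condition, self

    def __exit__(self, exc_type, exc, tb):
        print('goodbye')

with MyContext() as (condition, mc):
    if condition:
        do_something()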
I am not sure that is possible. __enter__ is executed outside of the try block introduced by the with statement, so I don't see a way of jumping directly from __enter__ into __exit__.
with basically (simplified) turns this:
with context as x:
    do_something()

Into

x = context.__enter__()
try:
    do_something()
finally:
    context.__exit__()
You need to throw an exception in do_something() or successfully complete it to get into __exit__. If you do throw an exception from do_something() you can do tricky stuff like suppressing it in the exit function (using some of the parameters passed to it), so you don't actually see it. But it has to be the code inside the with block which somehow causes the jump into __exit__.
Maybe if you can somehow ensure that do_something immediately throws an exception by setting some value in __enter__ you can make it work. But that does not sound like a very good idea to me.
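As a sketch of that (admittedly fragile) idea: the block's first statement asks the manager whether to bail out, and __exit__ swallows only a made-up sentinel exception. SkipBlock and should_skip are illustrative names, not an established recipe.
class SkipBlock(Exception):
    """Sentinel raised from inside the block to jump straight to __exit__."""
    pass

class MyContext(object):
    def __enter__(self):
        self.skip = should_skip()  # hypothetical condition test
        return self

    def check(self):
        # The body must call this first so the jump actually happens
        if self.skip:
            raise SkipBlock()

    def __exit__(self, exc_type, exc, tb):
        print('goodbye')
        return exc_type is SkipBlock  # swallow only the sentinel

with MyContext() as mc:
    mc.check()
    do_something()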
You could put the with statement in a function and use return:
def foo():
    with MyContext() as mc:
        if <condition>:
            return None
        do_something()
I have a scenario with Tornado where I have a coroutine that is called from a non-coroutine or without yielding, yet I need to propagate the exception back.
Imagine the following methods:
from tornado import gen

@gen.coroutine
def create_exception(with_yield):
    if with_yield:
        yield exception_coroutine()
    else:
        exception_coroutine()

@gen.coroutine
def exception_coroutine():
    raise RuntimeError('boom')

def no_coroutine_create_exception(with_yield):
    if with_yield:
        yield create_exception(with_yield)
    else:
        create_exception(with_yield)
Calling:
try:
    # Throws exception
    yield create_exception(True)
except Exception as e:
    print(e)
will properly raise the exception. However, none of the following raise the exception:
try:
    # none of these throw the exception at this level
    yield create_exception(False)
    no_coroutine_create_exception(True)
    no_coroutine_create_exception(False)
except Exception as e:
    print('This is never hit')
The latter are variants of my actual problem: I have code outside my control calling coroutines without using yield, and in some cases the callers are not coroutines themselves. In every such scenario, any exceptions they generate are swallowed until Tornado logs them as "future exception not retrieved."
This is pretty contrary to Tornado's intent; the documentation basically states that you need to use yield/coroutine through the entire call stack for this to work the way I want, without hackery/trickery.
I can change the way the exception is raised (i.e. modify exception_coroutine). But I cannot change several of the intermediate methods.
Is there something I can do in order to force the exception to be raised throughout the Tornado stack, even if it is not properly yielded? Basically to properly raise the exception in all of the last three situations?
This is complicated because I cannot change the code that is causing this situation. I can only change exception_coroutine for example in the above.
What you're asking for is impossible in Python because the decision to yield or not is made by the calling function after the coroutine has finished. The coroutine must return without raising an exception so it can be yielded, and after that it is no longer possible for it to raise an exception into the caller's context in the event that the Future is not yielded.
The best you can do is detect the garbage collection of a Future, but at that point you can't do anything but log it (this is how the "future exception not retrieved" message works).
If you're curious why this isn't working: no_coroutine_create_exception contains a yield statement, so it is a generator function, and calling it does not execute its code; it only creates a generator object:
>>> no_coroutine_create_exception(True)
<generator object no_coroutine_create_exception at 0x101651678>
>>> no_coroutine_create_exception(False)
<generator object no_coroutine_create_exception at 0x1016516d0>
Neither of the calls above executes any Python code, it only creates generators that must be iterated.
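The same is true of any generator function; its body only runs once the resulting generator is iterated (a quick illustration):
>>> def gen():
...     print('running')
...     yield 1
...
>>> g = gen()  # nothing is printed; the body has not run yet
>>> next(g)    # only now does the body execute up to the yield
running
1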
You'd have to make a blocking function that starts the IOLoop and runs it until your coroutine finishes:
from tornado import ioloop

def exception_blocking():
    return ioloop.IOLoop.current().run_sync(exception_coroutine)

exception_blocking()
(The IOLoop acts as a scheduler for multiple non-blocking tasks, and the gen.coroutine decorator is responsible for iterating the coroutine until completion.)
However, I think I'm likely answering your immediate question but merely enabling you to proceed down an unproductive path. You're almost certainly better off using async code or blocking code throughout instead of trying to mix them.
This question already has answers here:
What is the python "with" statement designed for?
(11 answers)
Closed 9 years ago.
I am new to Python. In a tutorial on connecting to MySQL and fetching data, I saw the with statement. I read about it and understood it was somehow related to a try-finally block, but I couldn't find a simpler explanation that I could understand.
with statements open a resource and guarantee that the resource will be closed when the with block completes, regardless of how the block completes. Consider a file:
with open('/etc/passwd', 'r') as f:
    print f.readlines()

print "file is now closed!"
The file is guaranteed to be closed at the end of the block -- even if you have a return, even if you raise an exception.
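Roughly speaking, the with block above behaves like this try/finally (a simplified sketch, not the exact expansion the interpreter uses):
f = open('/etc/passwd', 'r')
try:
    print f.readlines()
finally:
    f.close()

print "file is now closed!"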
In order for with to make this guarantee, the expression (open() in the example) must evaluate to a context manager. The good news is that many Python objects are context managers, but not all.
According to a tutorial I found, MySQLdb.connect() is, in fact, a context manager.
This code:
conn = MySQLdb.connect(...)

with conn:
    cur = conn.cursor()
    cur.do_this()
    cur.do_that()
will commit or rollback the sequence of commands as a single transaction. This means that you don't have to worry so much about exceptions or other unusual code paths -- the transaction will be dealt with no matter how you leave the code block.
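Spelled out by hand, that is roughly equivalent to the following (a sketch using the standard commit/rollback methods of a DB-API connection; the cursor calls are the same placeholders as above):
conn = MySQLdb.connect(...)  # connection parameters elided as above
cur = conn.cursor()
try:
    cur.do_this()
    cur.do_that()
except Exception:
    conn.rollback()  # undo the partial transaction
    raise
else:
    conn.commit()    # make the changes permanent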
Fundamentally, it's an object that demarcates a block of code with custom logic that is called on entrance and exit, and that can take arguments in its construction. You can define a custom context manager with a class:
class ContextManager(object):
    def __init__(self, args):
        pass

    def __enter__(self):
        # Entrance logic here, called before entry of the with block
        pass

    def __exit__(self, exception_type, exception_val, trace):
        # Exit logic here, called at exit of the with block
        return True
The __enter__ method then has access to the instance of the context manager class and can reference anything created in the __init__ method (files, sockets, etc.). The __exit__ method also receives the type and value of any exception raised in the enclosed block, plus the traceback object, or None for each if the block completed without raising.
We could then use it like so:
with ContextManager(myarg):
    # ... code here ...
    pass
This is useful for many things like managing resource lifetimes, freeing file descriptors, managing exceptions and even more complicated uses like building embedded DSLs.
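For instance, a context manager can inspect those __exit__ arguments to suppress only one specific exception type (a small sketch):
class IgnoreKeyErrors(object):
    def __enter__(self):
        return self

    def __exit__(self, exception_type, exception_val, trace):
        # Returning True suppresses the exception; returning None/False lets it propagate
        return exception_type is KeyError

with IgnoreKeyErrors():
    {}['missing']  # the KeyError is swallowed by __exit__

print('still running')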
An alternative (but equivalent) method of construction is to use the contextmanager decorator from contextlib, which uses a generator to separate the entrance and exit logic.
from contextlib import contextmanager
@contextmanager
def ContextManager(args):
    # Entrance logic here
    yield
    # Exit logic here
Think of with as creating a "supervisor" (context manager) over a code block. The supervisor can even be given a name and referenced within the block. When the code block ends, either normally or via an exception, the supervisor is notified and it can take appropriate action based on what happened.
I'm trying to write code that supports the following semantics:
with scope('action_name') as s:
    do_something()
    ...
do_some_other_stuff()
The scope, among other things (setup, cleanup) should decide if this section should run.
For instance, if the user configured the program to bypass 'action_name', then after scope() is evaluated, do_some_other_stuff() will be executed without calling do_something() first.
I tried to do it using this context manager:
@contextmanager
def scope(action):
    if action != 'bypass':
        yield
but got a "RuntimeError: generator didn't yield" exception (when action is 'bypass').
I am looking for a way to support this without falling back to the more verbose optional implementation:
with scope('action_name') as s:
    if s.should_run():
        do_something()
    ...
do_some_other_stuff()
Does anyone know how I can achieve this?
Thanks!
P.S. I am using python2.7
EDIT:
The solution doesn't necessarily have to rely on with statements; I just didn't know how else to express it. In essence, I want something in the form of a context (supporting setup and automatic cleanup, unrelated to the contained logic) that allows conditional execution based on parameters passed to the setup method and selected in the configuration.
I also thought about a possible solution using decorators. Example:
@scope('action_name')  # if 'action_name' is in the allowed actions, do:
                       #     setup()
                       #     do_action_name()
                       #     cleanup()
                       # otherwise return
def do_action_name():
    do_something()
but I don't want to enforce too much of the internal structure (i.e., how the code is divided into functions) based on these scopes.
Does anybody have some creative ideas?
You're trying to modify the expected behaviour of a basic language construct. That's never a good idea, it will just lead to confusion.
There's nothing wrong with your work-around, but you can simplify it just a bit.
@contextmanager
def scope(action):
    yield action != 'bypass'

with scope('action_name') as s:
    if s:
        do_something()
    ...
do_some_other_stuff()
Your scope could instead be a class whose __enter__ method returns either a useful object or None and it would be used in the same fashion.
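A sketch of that class-based variant (the names mirror the question; the bypass test stands in for whatever configuration check you actually need):
class scope(object):
    def __init__(self, action):
        self.action = action

    def __enter__(self):
        # setup() from the question would go here
        if self.action != 'bypass':
            return self
        return None

    def __exit__(self, exc_type, exc, tb):
        # cleanup() from the question would go here
        pass

with scope('action_name') as s:
    if s:
        do_something()
do_some_other_stuff()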
The following seems to work:
from contextlib import contextmanager
@contextmanager
def skippable():
    try:
        yield
    except RuntimeError as e:
        if e.message != "generator didn't yield":
            raise

@contextmanager
def context_if_condition():
    if False:
        yield True

with skippable(), context_if_condition() as ctx:
    print "won't run"
Considerations:
needs someone to come up with better names
context_if_condition can't be used without skippable but there's no way to enforce that/remove the redundancy
it could catch and suppress the RuntimeError from a deeper function than intended (a custom exception could help there, but that makes the whole construct messier still; see the sketch after this list)
it's not any clearer than just using @Mark Ransom's version
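For completeness, a sketch of the custom-exception variant mentioned above (SkipBody is a made-up name; context_if_condition now raises it explicitly instead of relying on contextlib's RuntimeError):
from contextlib import contextmanager

class SkipBody(Exception):
    """Marker exception so skippable() doesn't swallow unrelated RuntimeErrors."""
    pass

@contextmanager
def skippable():
    try:
        yield
    except SkipBody:
        pass  # swallow only our own marker

@contextmanager
def context_if_condition():
    if False:
        yield True
    else:
        raise SkipBody()

with skippable(), context_if_condition() as ctx:
    print "won't run"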
I don't think this can be done. I tried implementing a context manager as a class and there's just no way to force the block to raise an exception which would subsequently be squelched by the __exit__() method.
I have the same use case as you, and came across the conditional library that someone has helpfully developed in the time since you posted your question.
From the site, its use is as:
with conditional(CONDITION, CONTEXTMANAGER()):
    BODY()
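If you'd rather not add a dependency, a rough home-grown equivalent might look like this (a sketch, not the library's actual implementation; CONDITION, CONTEXTMANAGER and BODY are the same placeholders as above):
from contextlib import contextmanager

@contextmanager
def conditional(condition, cm):
    # Enter the wrapped context manager only when the condition holds;
    # the body of the with block runs either way.
    if condition:
        with cm:
            yield
    else:
        yield

with conditional(CONDITION, CONTEXTMANAGER()):
    BODY()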