Assume I have the following code:
with open('somefile.txt') as my_file:
    # some processing
    my_file.close()
Is my_file.close() above redundant?
Yes. Exiting the with block will close the file.
However, that is not necessarily true for objects that are not files. Normally, exiting the context should trigger an operation conceptually equivalent to "close", but in fact __exit__ can be overloaded to execute any code the object wishes.
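For example, threading.Lock is a context manager whose exit action is release(), nothing like a close() at all (a minimal sketch):

```python
import threading

lock = threading.Lock()
with lock:
    # __enter__ called lock.acquire(); we now hold the lock
    print(lock.locked())  # True
# __exit__ called lock.release(), not any kind of close()
print(lock.locked())  # False
```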
Yes, it is. Besides, there is no guarantee that your close() will always be executed (for instance, if an exception occurs):
with open('somefile.txt') as my_file:
    1/0  # raise Exception
    my_file.close()  # your close() call is never going to be reached
But the __exit__() method of the context manager is always executed, because the with statement follows the try...except...finally pattern:
The with statement is used to wrap the execution of a block with
methods defined by a context manager (see section With Statement
Context Managers). This allows common try...except...finally usage
patterns to be encapsulated for convenient reuse.
The context manager’s __exit__() method is invoked. If an exception
caused the suite to be exited, its type, value, and traceback are
passed as arguments to __exit__()
You can check that the file has been closed right after the with statement using the closed attribute:
>>> with open('somefile.txt') as f:
...     pass
...
>>> f.closed
True
Source for my answer:
Understanding Python's "with" statement
The with statement creates a runtime context. Python creates the stream object of the file and tells it that it is entering a runtime context. When the with code block is completed, Python tells the stream object that it is exiting the runtime context, and the stream object calls its own close() method.
Yes, the with statement takes care of that, as you can see in the documentation:
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__().
In the case of files, the __exit__() method will close the file
Related
Could you please help me understand the difference between these two syntaxes in Django tests (Python 3.7):
def test_updateItem_deletion(self):
    # some logic in here
    with self.assertRaises(OrderItem.DoesNotExist):
        OrderItem.objects.get(id=self.product_1.id)
And:
# all the same, but self.assertRaises not wrapped in 'with'
self.assertRaises(OrderItem.DoesNotExist, OrderItem.objects.get(id=self.product_1.id))
The first one worked and test passed. But the second one raised:
models.OrderItem.DoesNotExist: OrderItem matching query does not exist.
Does it somehow replicate the behaviour of try/catch block?
Thank you a lot!
In the first one, assertRaises is used as a context manager, so the exception raised inside the with block is caught and checked. In the second one, nothing catches the exception: OrderItem.objects.get(id=self.product_1.id) is evaluated while building the argument list, so it raises before assertRaises even runs.
This is what is known as a context manager. When assertRaises is used in a with statement, its __exit__ method is called at the end of the block and receives any exception raised during the block's execution. That __exit__ method is never involved when you call assertRaises directly with an already-evaluated expression, so the exception is not captured.
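The non-with form does work if you pass assertRaises the callable and its arguments separately, so the call happens inside assertRaises; in the Django test that would be self.assertRaises(OrderItem.DoesNotExist, OrderItem.objects.get, id=self.product_1.id). A minimal sketch with plain unittest (no Django models, so KeyError stands in for the model exception):

```python
import unittest

class AssertRaisesDemo(unittest.TestCase):
    def test_callable_form(self):
        # Pass the callable and its arguments separately; assertRaises
        # invokes it internally, inside a try/except.
        self.assertRaises(KeyError, {}.__getitem__, 'missing')

    def test_context_manager_form(self):
        # Equivalent with-statement form.
        with self.assertRaises(KeyError):
            {}['missing']

if __name__ == '__main__':
    unittest.main()
```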
Here you will find more info about this:
Official Doc
Python Tips
If I am correct, the with statement doesn't introduce a new local scope.
These are examples from Learning Python:
with open(r'C:\misc\data') as myfile:
    for line in myfile:
        print(line)
        ...more code here...
and
lock = threading.Lock()  # After: import threading
with lock:
    # critical section of code
    ...access shared resources...
Is the second example equivalent to the following rewritten in a way similar to the first example?
with threading.Lock() as lock:
    # critical section of code
    ...access shared resources...
What are their differences?
Is the first example equivalent to the following rewritten in a way similar to the second example?
myfile = open(r'C:\misc\data')
with myfile:
    for line in myfile:
        print(line)
        ...more code here...
What are their differences?
When with enters a context, it calls a hook on the context manager object, called __enter__, and the return value of that hook can optionally be assigned to a name using as <name>. Many context managers return self from their __enter__ hook. If they do, then you can indeed take your pick between creating the context manager on a separate line or capturing the object with as.
Out of your two examples, only the file object returned from open() has an __enter__ hook that returns self. For threading.Lock(), __enter__ returns the same value as Lock.acquire(), so a boolean, not the lock object itself.
You'll need to look for explicit documentation that confirms this; it is not always clearly stated, however. For Lock objects, the relevant section of the documentation states:
All of the objects provided by this module that have acquire() and release() methods can be used as context managers for a with statement. The acquire() method will be called when the block is entered, and release() will be called when the block is exited.
and for file objects, the IOBase documentation is rather on the vague side and you have to infer from the example that the file object is returned.
The main thing to take away is that returning self is not mandatory, nor is it always desired. Context managers are entirely free to return something else. For example, many database connection objects are context managers that let you manage the transaction (roll back or commit automatically, depending on whether or not there was an exception), where entering returns a new cursor object bound to the connection.
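A hedged sketch of that design using sqlite3 (the transaction() helper is invented here for illustration; it is not part of the sqlite3 API): entering yields a cursor rather than the connection, and the exit logic commits or rolls back.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def transaction(conn):
    # Hypothetical helper: yield a cursor, commit on success,
    # roll back if the block raised.
    cur = conn.cursor()
    try:
        yield cur
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')

with transaction(conn) as cur:
    cur.execute('INSERT INTO t VALUES (1)')
# The insert is committed once the block exits cleanly.
```

Here the name bound by as is a cursor, not the connection: whatever __enter__ (or the code before yield) produces is what you get.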
To be explicit:
for your open() example, the two examples are for all intents and purposes exactly the same. Both call open(), and if that does not raise an exception, you end up with a reference to that file object named myfile. In both cases the file object will be closed after the with statement is done. The name continues to exist after the with statement is done.
There is a difference, but it is mostly technical. For with open(...) as myfile:, the file object is created, has its __enter__ method called, and only then is myfile bound. For the myfile = open(...) case, myfile is bound first and __enter__ is called later.
For your with threading.Lock() as lock: example, using as lock will bind lock to True (acquiring a lock this way either succeeds or blocks indefinitely, so the result is always True). This differs from the lock = threading.Lock() case, where lock is bound to the lock object itself.
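You can observe both behaviours directly (os.devnull is used here just as a file that is guaranteed to exist):

```python
import os
import threading

with threading.Lock() as value:
    # Lock.__enter__ returns acquire()'s result, a boolean
    print(value)  # True

with open(os.devnull) as f:
    # A file's __enter__ returns the file object itself
    print(f.__enter__() is f)  # True
```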
Here's a good explanation. I'll paraphrase the key part:
The with statement could be thought of like this code:
set things up
try:
    do something
finally:
    tear things down
Here, “set things up” could be opening a file, or acquiring some sort of external resource, and “tear things down” would then be closing the file, or releasing or removing the resource. The try-finally construct guarantees that the “tear things down” part is always executed, even if the code that does the work doesn’t finish.
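Applied to a file, the with statement behaves roughly like this hand-written version (a simplification: the real protocol also passes exception details to __exit__; the temp-file path here is just for the demo):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'with_demo.txt')
with open(path, 'w') as f:
    f.write('hello')

# Hand-written equivalent of `with open(path) as f: data = f.read()`:
f = open(path)       # set things up
try:
    data = f.read()  # do something
finally:
    f.close()        # tear things down; runs even if read() raised

print(data)  # hello
```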
PyCharm warns about this code, saying the last return is unreachable:
def foo():
    with open(...):
        return 1
    return 0
I expect that the second return would execute if open() failed. Who's right?
PyCharm is right. If open() fails, an exception is raised, and neither return is reached.
with does not somehow protect you from an exception in the expression that produces the context manager. The expression after with is expected to produce a context manager, at which point its __exit__ method is stored and its __enter__ method is called. The only possible outcomes are that the context manager is successfully produced and entered, or that an exception is raised. At no point will with swallow an exception at this stage and silently skip the block.
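You can see this directly with a path that does not exist: the exception propagates out of foo() and neither return runs.

```python
def foo(path):
    with open(path):
        return 1
    return 0  # unreachable: either the block returns 1, or open() raised

try:
    foo('/no/such/file/anywhere')
except FileNotFoundError as exc:
    print('neither return ran; open() raised:', exc)
```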
I have a scenario with Tornado where I have a coroutine that is called from a non-coroutine or without yielding, yet I need to propagate the exception back.
Imagine the following methods:
@gen.coroutine
def create_exception(with_yield):
    if with_yield:
        yield exception_coroutine()
    else:
        exception_coroutine()

@gen.coroutine
def exception_coroutine():
    raise RuntimeError('boom')

def no_coroutine_create_exception(with_yield):
    if with_yield:
        yield create_exception(with_yield)
    else:
        create_exception(with_yield)
Calling:
try:
    # Throws exception
    yield create_exception(True)
except Exception as e:
    print(e)
will properly raise the exception. However, none of the following raise the exception :
try:
    # none of these throw the exception at this level
    yield create_exception(False)
    no_coroutine_create_exception(True)
    no_coroutine_create_exception(False)
except Exception as e:
    print('This is never hit')
The latter are variants of the problem I actually have: code outside my control calls coroutines without using yield, and in some cases the callers are not coroutines themselves. Either way, any exceptions they generate are swallowed until Tornado reports them as "future exception not retrieved."
This is pretty contrary to Tornado's intent, their documentation basically states you need to do yield/coroutine through the entire stack in order for it to work as I'm desiring without hackery/trickery.
I can change the way the exception is raised (ie modify exception_coroutine). But I cannot change several of the intermediate methods.
Is there something I can do in order to force the exception to be raised throughout the Tornado stack, even if it is not properly yielded? Basically to properly raise the exception in all of the last three situations?
This is complicated because I cannot change the code that is causing this situation. I can only change exception_coroutine for example in the above.
What you're asking for is impossible in Python because the decision to yield or not is made by the calling function after the coroutine has finished. The coroutine must return without raising an exception so it can be yielded, and after that it is no longer possible for it to raise an exception into the caller's context in the event that the Future is not yielded.
The best you can do is detect the garbage collection of a Future, but at that point you can do nothing but log it (this is how the "future exception not retrieved" message works).
If you're curious why this isn't working, it's because no_coroutine_create_exception contains a yield statement. Therefore it's a generator function, and calling it does not execute its code, it only creates a generator object:
>>> no_coroutine_create_exception(True)
<generator object no_coroutine_create_exception at 0x101651678>
>>> no_coroutine_create_exception(False)
<generator object no_coroutine_create_exception at 0x1016516d0>
Neither of the calls above executes any code from the function body; each merely creates a generator that must be iterated.
You'd have to make a blocking function that starts the IOLoop and runs it until your coroutine finishes:
def exception_blocking():
    return ioloop.IOLoop.current().run_sync(exception_coroutine)

exception_blocking()
(The IOLoop acts as a scheduler for multiple non-blocking tasks, and the gen.coroutine decorator is responsible for iterating the coroutine until completion.)
However, I think I'm likely answering your immediate question but merely enabling you to proceed down an unproductive path. You're almost certainly better off using async code or blocking code throughout instead of trying to mix them.
This question already has answers here:
What is the python "with" statement designed for?
I am new to Python. In a tutorial on connecting to MySQL and fetching data, I saw the with statement. I read about it, and it seems to be related to the try-finally block, but I couldn't find a simpler explanation that I could understand.
with statements open a resource and guarantee that the resource will be closed when the with block completes, regardless of how the block completes. Consider a file:
with open('/etc/passwd', 'r') as f:
    print(f.readlines())
print("file is now closed!")
The file is guaranteed to be closed at the end of the block -- even if you have a return, even if you raise an exception.
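A quick check that the guarantee holds even when the block raises (os.devnull is used here only as a file that always exists):

```python
import os

f = None
try:
    with open(os.devnull) as f:
        raise RuntimeError('boom in the middle of the block')
except RuntimeError:
    pass

print(f.closed)  # True: the file was closed despite the exception
```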
In order for with to make this guarantee, the expression (open() in the example) must evaluate to a context manager. The good news is that many Python objects are context managers, but not all.
According to a tutorial I found, MySQLdb.connect() is, in fact, a context manager.
This code:
conn = MySQLdb.connect(...)
with conn:
    cur = conn.cursor()
    cur.do_this()
    cur.do_that()
will commit or rollback the sequence of commands as a single transaction. This means that you don't have to worry so much about exceptions or other unusual code paths -- the transaction will be dealt with no matter how you leave the code block.
Fundamentally, a context manager is an object that demarcates a block of code with custom logic that is called on entry and exit, and that can take arguments in its construction. You can define a custom context manager with a class:
class ContextManager(object):
    def __init__(self, args):
        pass

    def __enter__(self):
        # Entrance logic here, called before entry of with block
        pass

    def __exit__(self, exception_type, exception_val, trace):
        # Exit logic here, called at exit of with block
        return True
The __enter__ method runs on the instance of the ContextManager class and can reference anything created in the __init__ method (files, sockets, etc.). The __exit__ method receives any exception raised in the enclosed block, along with its type and traceback object, or None for all three if the block completed without raising.
We could then use it like so:
with ContextManager(myarg):
    # ... code here ...
This is useful for many things like managing resource lifetimes, freeing file descriptors, managing exceptions and even more complicated uses like building embedded DSLs.
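A runnable version of the class sketch; note that returning True from __exit__ suppresses the exception, so return False (or None) if you want errors to propagate:

```python
class Verbose:
    def __enter__(self):
        print('entering')
        return self

    def __exit__(self, exception_type, exception_val, trace):
        # (None, None, None) on a clean exit, otherwise the exception info
        print('exiting with', exception_type)
        return True  # returning True suppresses the exception

with Verbose():
    raise ValueError('swallowed by __exit__')
print('still running')  # the ValueError never escaped the block
```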
An alternative (but equivalent) method of construction is to use the contextlib contextmanager decorator, which uses a generator to separate the entrance and exit logic.
from contextlib import contextmanager
@contextmanager
def ContextManager(args):
    # Entrance logic here
    yield
    # Exit logic here
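For example (the managed() name and its printed messages are made up for illustration):

```python
from contextlib import contextmanager

@contextmanager
def managed(name):
    # Entrance logic: runs when the with block is entered
    print('acquiring', name)
    try:
        yield name.upper()  # value bound by "as"; the block body runs here
    finally:
        # Exit logic: runs when the block exits, even on exceptions
        print('releasing', name)

with managed('resource') as handle:
    print('using', handle)  # using RESOURCE
```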
Think of with as creating a "supervisor" (context manager) over a code block. The supervisor can even be given a name and referenced within the block. When the code block ends, either normally or via an exception, the supervisor is notified and it can take appropriate action based on what happened.