Evaluate and assign expression in or before with statement - python

If I am correct, the with statement doesn't introduce a local scope of its own.
These are examples from Learning Python:
with open(r'C:\misc\data') as myfile:
    for line in myfile:
        print(line)
    ...more code here...
and
lock = threading.Lock()  # After: import threading
with lock:
    # critical section of code
    ...access shared resources...
Is the second example equivalent to the following rewritten in a way similar to the first example?
with threading.Lock() as lock:
    # critical section of code
    ...access shared resources...
What are their differences?
Is the first example equivalent to the following rewritten in a way similar to the second example?
myfile = open(r'C:\misc\data')
with myfile:
    for line in myfile:
        print(line)
    ...more code here...
What are their differences?

When with enters a context, it calls a hook on the context manager object, called __enter__, and the return value of that hook can optionally be assigned to a name using as <name>. Many context managers return self from their __enter__ hook. If they do, then you can indeed take your pick between creating the context manager on a separate line or capturing the object with as.
Out of your two examples, only the file object returned from open() has an __enter__ hook that returns self. For threading.Lock(), __enter__ returns the same value as Lock.acquire(), so a boolean, not the lock object itself.
You'll need to look for explicit documentation that confirms this; it is not always spelled out clearly, however. For Lock objects, the relevant section of the documentation states:
All of the objects provided by this module that have acquire() and release() methods can be used as context managers for a with statement. The acquire() method will be called when the block is entered, and release() will be called when the block is exited.
and for file objects, the IOBase documentation is rather on the vague side and you have to infer from the example that the file object is returned.
The main thing to take away is that returning self is not mandatory, nor is it always desired. Context managers are entirely free to return something else. For example, many database connection objects are context managers that let you manage the transaction (roll back or commit automatically, depending on whether or not there was an exception), where entering returns a new cursor object bound to the connection.
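A minimal sketch of that shape (Transaction is a toy class invented here, not a real database API): __enter__ hands back a different object than the context manager itself.

```python
class Transaction(object):
    """Toy context manager whose __enter__ returns a cursor-like list."""

    def __enter__(self):
        self.statements = []
        return self.statements  # bound by "as" -- not self!

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            print('commit:', self.statements)
        else:
            print('rollback')
        return False  # do not suppress exceptions

with Transaction() as cursor:
    cursor.append('statement one')  # cursor is the list, not the Transaction
```

Here `as cursor` binds the list created in __enter__, so code inside the block never touches the Transaction object directly.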
To be explicit:
for your open() example, the two examples are for all intents and purposes exactly the same. Both call open(), and if that does not raise an exception, you end up with a reference to that file object named myfile. In both cases the file object will be closed after the with statement is done. The name continues to exist after the with statement is done.
There is a difference, but it is mostly technical. For with open(...) as myfile:, the file object is created, has its __enter__ method called, and only then is myfile bound. In the myfile = open(...) case, myfile is bound first and __enter__ is called later.
For your with threading.Lock() as lock: example, using as lock will bind lock to True (acquiring a lock this way either succeeds or blocks indefinitely, so __enter__ always returns True). This differs from the lock = threading.Lock() case, where lock is bound to the lock object itself.
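A quick check with the standard library makes both behaviours visible; tempfile is used here only to have a harmless file to open:

```python
import os
import tempfile
import threading

# Lock.__enter__ returns the result of acquire(), i.e. True:
with threading.Lock() as entered:
    print(entered)  # True, not the lock object

# A file object's __enter__ returns the file object itself:
fd, path = tempfile.mkstemp()
os.close(fd)
f = open(path, 'w')
print(f.__enter__() is f)  # True, which is why "as myfile" binds the file
f.close()
os.remove(path)
```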

Here's a good explanation. I'll paraphrase the key part:
The with statement could be thought of like this code:
set things up
try:
    do something
finally:
    tear things down
Here, “set things up” could be opening a file, or acquiring some sort of external resource, and “tear things down” would then be closing the file, or releasing or removing the resource. The try-finally construct guarantees that the “tear things down” part is always executed, even if the code that does the work doesn’t finish.
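Instantiated for the file case (using a throwaway temp file for the demo), the pattern looks like:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()  # a throwaway file for the demo
os.close(fd)

f = open(path, 'w')    # set things up
try:
    f.write('Hola!')   # do something
finally:
    f.close()          # tear things down, even if the write raises
```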


What is the difference between 'open("file_path")' and 'with open("file_path")' in Python 3.8.10 and which one is most suitable to use? [duplicate]

This question already has an answer here:
What is the purpose of a context manager in python [duplicate]
I am studying Python and I found there are two types of file opening operations.
The first one is,
myreadfile = open("bear.txt", "r")
content = myreadfile.read()
second method is
with open("bear.txt") as file:
    content = file.read()
I want to know whether there is any difference between these two methods and which one is most suitable to use.
They are context managers.
Explanation:
The with statement invokes a context manager; if you use it for reading or writing files, it will automatically close the file, so you don't need to add a line of file.close(). As mentioned in the docs:
Context managers allow you to allocate and release resources precisely when you want to. The most widely used example of context managers is the with statement. Suppose you have two related operations which you’d like to execute as a pair, with a block of code in between. Context managers allow you to do specifically that.
Examples:
There are examples in the docs, a regular with statement:
with open('some_file', 'w') as opened_file:
    opened_file.write('Hola!')
Is equivalent to:
file = open('some_file', 'w')
try:
    file.write('Hola!')
finally:
    file.close()
It says that:
While comparing it to the first example we can see that a lot of boilerplate code is eliminated just by using with. The main advantage of using a with statement is that it makes sure our file is closed without paying attention to how the nested block exits.
A brief introduction of the implementation:
As mentioned in the docs, the context manager could be implemented with a class:
At the very least a context manager has an __enter__ and __exit__ method defined.
As shown there, an example context manager implementation in a class would be something like:
class File(object):
    def __init__(self, file_name, method):
        self.file_obj = open(file_name, method)

    def __enter__(self):
        return self.file_obj

    def __exit__(self, type, value, traceback):
        self.file_obj.close()

with File('demo.txt', 'w') as opened_file:
    opened_file.write('Hola!')
The code can behave like a context manager due to the magic methods, __enter__ and __exit__.
The first one does not close the file automatically. The second one does.
As far as I understand, with the first one you need to close the file after you're done with the operations, while with the latter the file is automatically closed after execution of the indented block.
The second method is the recommended one. The with syntax creates a context and will automatically close the file once the context is exited, including when something goes wrong during the operation.
In the first case, the file is opened and read. It stays open afterwards.
In the second case, you use the file object as a so-called "context manager". Special methods get called on entering and leaving the with block: on leaving, it is closed. This is superior to the other, even superior to
myreadfile = open("bear.txt", "r")
content = myreadfile.read()
myreadfile.close()
because the close() line isn't reached when read() throws an exception.
It is more like
myreadfile = open("bear.txt", "r")
try:
    content = myreadfile.read()
finally:
    myreadfile.close()
but easier to use.

What is the benefit of using a context manager with multiprocessing.Manager?

In the documentation, Manager is used with a context manager (i.e. with) like so:
from multiprocessing.managers import BaseManager

class MathsClass:
    def add(self, x, y):
        return x + y

    def mul(self, x, y):
        return x * y

class MyManager(BaseManager):
    pass

MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    with MyManager() as manager:
        maths = manager.Maths()
        print(maths.add(4, 3))  # prints 7
        print(maths.mul(7, 8))  # prints 56
But what is the benefit of this, with the exception of the namespace? For opening file streams, the benefit is quite obvious in that you don't have to manually .close() the connection, but what is it for Manager? If you don't use it in a context, what steps do you have to use to ensure that everything is closed properly?
In short, what is the benefit of using the above over something like:
manager = MyManager()
maths = manager.Maths()
print(maths.add(4, 3)) # prints 7
print(maths.mul(7, 8)) # prints 56
But what is the benefit of this (...)?
First, you get the primary benefit of almost any context managers. You have a well-defined lifetime for the resource. It is allocated and acquired when the with ...: block is opened. It is released when the blocks ends (either by reaching the end or because an exception is raised). It is still deallocated whenever the garbage collector gets around to it but this is of less concern since the external resource has already been released.
In the case of multiprocessing.Manager (which is a function that returns a SyncManager, even though Manager looks a lot like a class), the resource is a "server" process that holds state and a number of worker processes that share that state.
what is [the benefit of using a context manager] for Manager?
If you don't use a context manager and you don't call shutdown on the manager, then the "server" process will continue running until the SyncManager's __del__ is run. In some cases, this might happen soon after the code that created the SyncManager is done (for example, if it is created inside a short function, the function returns normally, and you're running CPython, then the reference-counting system will probably notice quickly that the object is dead and call its __del__). In other cases, it might take longer (if an exception is raised and holds on to a reference to the manager, the manager will be kept alive until that exception is dealt with).

In some bad cases, it might never happen at all: in Python versions before 3.4, a SyncManager caught in a reference cycle has a __del__ that prevents the cycle collector from collecting it at all, and in any version your process might crash before __del__ is called.

In all these cases, you're giving up control of when the extra Python processes created by the SyncManager are cleaned up. These processes may represent non-trivial resource usage on your system. In really bad cases, if you create SyncManagers in a loop, you may end up with many of them alive at the same time, easily consuming huge quantities of resources.
If you don't use it in a context, what steps do you have to use to ensure that everything is closed properly?
You have to implement the context manager protocol yourself, as you would for any context manager you used without with. It's tricky to do in pure-Python while still being correct. Something like:
import sys

manager = None
try:
    manager = MyManager()
    manager.__enter__()
    # use it ...
except:
    if manager is not None:
        manager.__exit__(*sys.exc_info())
    raise
else:
    if manager is not None:
        manager.__exit__(None, None, None)
start() and shutdown() correspond roughly to __enter__ and __exit__, respectively: __enter__ starts the server process if it has not already been started, and __exit__ calls shutdown().

Context Managers in Matlab: Invoking __enter__ in Matlab

I have a Python package and I would like to use its classes and methods in MATLAB. I know that this can be done directly since MATLAB 2014b; all you have to do is add py. at the beginning of your statements. So far so good; however, I couldn't figure out how to deal with context managers through MATLAB, i.e. the objects that are invoked using the with statement. For instance, assume that we have the following class in a module called app.py:
class App(object):
    def __init__(self, input):
        self._input = input
        self._is_open = False

    def __enter__(self):
        self._is_open = True
        # many other stuff going after this but not relevant to this problem
In Matlab, I can call this as
app = py.app.App(input);
py.getattr(app, '_is_open')

ans =

  logical

   0
and I see an instance of App in my workspace. However, as expected only __init__ is invoked this way but not __enter__.
So, is there a way to invoke __enter__ from Matlab, as if we are calling it like with App(input) as app: in Python?
Note: I am using Python 3.5.1 and Matlab 2017b
I don't believe there is any way to invoke the __enter__ method of a Python class from MATLAB, but the __exit__ method might be implicitly called (I'll address this further below).
It's important to first consider the purpose of context managers (via the __enter__ and __exit__ methods), which is to provide a way to allocate and release resources in a scope-limited fashion, whether or not that scope is exited normally or via an error. MATLAB has a more limited means of "scoping": each function has its own workspace, and control structures like loops, conditional statements, etc. within that function all share that workspace (unlike many languages in which these control structures have their own sub-scopes).
When a workspace is exited in MATLAB, the variables it contains are cleared, but any resources that were allocated may still need to be released. This can be achieved with onCleanup objects. When they are cleared from memory, they invoke a given function for managing existing resources. An example would be opening and reading from a file:
function openFileSafely(fileName)
    fid = fopen(fileName, 'r');
    c = onCleanup(@() fclose(fid));
    s = fread(fid);
    ...
end
Here, a file is opened and subsequently read from. An onCleanup object c is created that will close the file when c is cleared from memory upon exit from the function. If the file were simply closed with fclose(fid) at the end of the function, then an error exit from the function (such as during the course of reading data) would cause the file to remain open. Using an onCleanup object ensures that the file will be closed regardless of how the function exits. Here's an example of how this could be handled in Python:
with open('some_file', 'w') as opened_file:
    opened_file.write('Hola!')
Since MATLAB has a different means of "context management" than Python, this may explain why it's not possible to access the __enter__ method. I tried with a class I knew had one: the io.FileIO class. I first looked for help:
>> py.help('io.FileIO.__enter__')
Help on method_descriptor in io.FileIO:
io.FileIO.__enter__ = __enter__(...)
It finds some help text. It's not particularly helpful, but it's there. However, when I create an object and look at its methods list, neither __enter__ nor __exit__ (nor a clear equivalent) is there:
>> fio = py.io.FileIO('test.txt');
>> methods(fio)
Methods for class py._io.FileIO:
FileIO eq ge le read readinto seek truncate writelines
char fileno gt lt readable readline seekable writable
close flush isatty ne readall readlines tell write
Methods of py._io.FileIO inherited from handle.
Methods for class handle:
addlistener eq findprop gt le ne
delete findobj ge isvalid lt notify
I did notice something interesting when I cleared the fio object, though. While the fio object still existed (with the file open), I couldn't delete or move the file, as expected. However, after issuing the command clear fio, without first closing the file, I was able to interact with the file normally. This implies that the file was automatically closed. This makes me wonder if the __exit__ method might be getting implicitly invoked, but I have yet to determine it for certain.
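On the Python side, this is consistent with documented behaviour: IOBase.__del__ calls close(), so in CPython dropping the last reference closes the file. A small check, using a subclass purely to observe the implicit close:

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
closed = []

class TrackedFile(io.FileIO):
    # io.IOBase.__del__ calls close(), so overriding close() lets us
    # see the implicit close when the object is garbage-collected.
    def close(self):
        closed.append(True)
        super().close()

f = TrackedFile(path, 'w')
del f          # CPython: refcount hits zero and __del__ runs immediately
print(closed)  # [True] -- closed without an explicit close() call
os.remove(path)
```

This suggests that what MATLAB's clear triggers is the ordinary Python finalizer, which in turn calls close() (and, for context managers generally, there is no guarantee that __exit__ itself runs on finalization).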

Is calling `close()` redundant if file opened with `with` statement

Assume I have the following code:
with open('somefile.txt') as my_file:
    # some processing
    my_file.close()
Is my_file.close() above redundant?
Yes. Exiting the with block will close the file.
However, that is not necessarily true for objects that are not files. Normally, exiting the context should trigger an operation conceptually equivalent to "close", but in fact __exit__ can be overloaded to execute any code the object wishes.
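contextlib.suppress from the standard library is a concrete example: its __exit__ swallows the listed exception types rather than closing anything.

```python
from contextlib import suppress

# suppress's __exit__ returns True for the listed exception types,
# which tells the with statement to swallow the exception.
with suppress(FileNotFoundError):
    raise FileNotFoundError('demo')

print('execution continues past the with block')
```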
Yes, it is. Besides, there is no guarantee that your close() call will always be executed (for instance, if an exception occurs):
with open('somefile.txt') as my_file:
    1/0  # raise Exception
    my_file.close()  # your close() call is never going to be reached
But the context manager's __exit__() method is always executed, because the with statement follows the try...except...finally pattern.
The with statement is used to wrap the execution of a block with methods defined by a context manager (see section With Statement Context Managers). This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__().
You can check that the file has been closed right after the with statement using the closed attribute:
>>> with open('somefile.txt') as f:
...     pass
>>> f.closed
True
Source for my answer:
Understanding Python's "with" statement
The with statement creates a runtime context. Python creates the stream object for the file and tells it that it is entering a runtime context. When the with code block is completed, Python tells the stream object that it is exiting the runtime context, and the stream object calls its own close() method.
Yes, the with statement takes care of that,
as you can see in the documentation:
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__().
In the case of files, the __exit__() method will close the file

What does the 'with' statement do in python? [duplicate]

This question already has answers here:
What is the python "with" statement designed for?
I am new to Python. In one tutorial of connecting to mysql and fetching data, I saw the with statement. I read about it and it was something related to try-finally block. But I couldn't find a simpler explanation that I could understand.
with statements open a resource and guarantee that the resource will be closed when the with block completes, regardless of how the block completes. Consider a file:
with open('/etc/passwd', 'r') as f:
    print f.readlines()

print "file is now closed!"
The file is guaranteed to be closed at the end of the block -- even if you have a return, even if you raise an exception.
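The return case is easy to verify; stashing the file object on the function below is a trick for inspection only, not something you'd do in real code:

```python
import os
import tempfile

def first_line(path):
    with open(path) as f:
        first_line.f = f     # stash the file object so we can inspect it
        return f.readline()  # the file is still closed, despite the return

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as out:
    out.write('hello')

line = first_line(path)
print(line)                 # hello
print(first_line.f.closed)  # True: closed even though we returned early
os.remove(path)
```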
In order for with to make this guarantee, the value the expression produces (open() in the example) must be a context manager. The good news is that many Python objects are context managers, but not all.
According to a tutorial I found, MySQLdb.connect() is, in fact, a context manager.
This code:
conn = MySQLdb.connect(...)
with conn:
    cur = conn.cursor()
    cur.do_this()
    cur.do_that()
will commit or rollback the sequence of commands as a single transaction. This means that you don't have to worry so much about exceptions or other unusual code paths -- the transaction will be dealt with no matter how you leave the code block.
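The standard library's sqlite3 connections behave the same way, so the effect can be reproduced without MySQLdb. Note that the connection's context manager commits or rolls back, but does not close the connection:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')

with conn:  # a normal exit from the block commits
    conn.execute('INSERT INTO t VALUES (1)')

try:
    with conn:  # an exception inside the block rolls back
        conn.execute('INSERT INTO t VALUES (2)')
        raise RuntimeError('simulated failure')
except RuntimeError:
    pass

rows = conn.execute('SELECT x FROM t').fetchall()
print(rows)   # [(1,)]
conn.close()  # the context manager did NOT close the connection
```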
Fundamentally, it's an object that demarcates a block of code with custom logic that is called on entrance and exit, and that can take arguments in its construction. You can define a custom context manager with a class:
class ContextManager(object):
    def __init__(self, args):
        pass

    def __enter__(self):
        # Entrance logic here, called before entry of with block
        pass

    def __exit__(self, exception_type, exception_val, trace):
        # Exit logic here, called at exit of with block
        return True  # returning True suppresses exceptions raised in the block
The with statement then calls __enter__ on an instance of the context manager class, which can reference anything created in the __init__ method (files, sockets, etc.). The __exit__ method receives the type, value, and traceback of any exception raised in the inner block, or three Nones if the block completed without raising.
We could then use it like so:
with ContextManager(myarg):
    # ... code here ...
This is useful for many things like managing resource lifetimes, freeing file descriptors, managing exceptions and even more complicated uses like building embedded DSLs.
An alternative (but equivalent) method of construction is to use the contextmanager decorator from contextlib, which uses a generator to separate the entrance and exit logic.
from contextlib import contextmanager

@contextmanager
def ContextManager(args):
    # Entrance logic here
    yield
    # Exit logic here
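For example, the decorator can rebuild the file-opening behaviour in a few lines; the try/finally around the yield ensures the exit logic runs even if the block raises:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def opened(path, mode='r'):
    f = open(path, mode)  # entrance logic
    try:
        yield f           # the value bound by "as"
    finally:
        f.close()         # exit logic, runs even if the block raises

fd, path = tempfile.mkstemp()
os.close(fd)
with opened(path, 'w') as f:
    f.write('Hola!')
print(f.closed)  # True
os.remove(path)
```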
Think of with as creating a "supervisor" (context manager) over a code block. The supervisor can even be given a name and referenced within the block. When the code block ends, either normally or via an exception, the supervisor is notified and it can take appropriate action based on what happened.
