I often see code that uses self to manage a context. For example:
with self:
    self.x = 4
    self.y = 6
What's going on here? What does using self as a context allow?
Code that uses with self: suggests that whatever class you're using provides __enter__ and __exit__ methods, i.e. that it is a context manager. Those methods can be convenient for setup/teardown in testing, etc.
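For instance (a hedged sketch; the TempWorkspace class below is invented for illustration, not taken from the question), a test helper might create a scratch directory on entry and remove it on exit:

import shutil
import tempfile

class TempWorkspace:
    def __enter__(self):
        # setup: create a scratch directory before the block runs
        self.path = tempfile.mkdtemp()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # teardown: remove it when the block exits, even on error
        shutil.rmtree(self.path)
        return False  # don't suppress exceptions

with TempWorkspace() as ws:
    print("working in", ws.path)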
What's going on here? What does using self as a context allow?
As long as the class implements the necessary "hooks" that a context manager should provide, Python allows it to be used like any other context manager. Here is an excerpt from the docs which helps clear things up:
Python’s with statement supports the concept of a runtime context defined by a context manager. This is implemented using a pair of methods that allow user-defined classes to define a runtime context that is entered before the statement body is executed and exited when the statement ends:
contextmanager.__enter__()
Enter the runtime context and return either this object or another object related to the runtime context. The value returned by this method is bound to the identifier in the as clause of with statements using this context manager.
[...]
contextmanager.__exit__(exc_type, exc_val, exc_tb)
Exit the runtime context and return a Boolean flag indicating if any exception that occurred should be suppressed. If an exception occurred while executing the body of the with statement, the arguments contain the exception type, value and traceback information. Otherwise, all three arguments are None.
[...]
As stated above, when you implement the necessary __enter__ and __exit__ magic methods for your class, Python allows you to treat it as a context manager.
If self is a context manager (i.e. has __enter__ and __exit__ methods) this will simply invoke that functionality, the same as it would if the instance were used in a with block outside the class.
There's nothing special happening here. self behaves the same way in a with block that anything else would. It calls __enter__ when you enter the scope and __exit__ when you leave the scope through any means. I can't imagine what using self here would accomplish, but if you can come up with some examples of where you've seen that, we might be able to provide a better answer.
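For what it's worth, here is a minimal sketch of the pattern in the question (the class, method, and attribute names are invented): a class that is its own context manager and runs with self: inside one of its methods, which behaves exactly as the same instance would in a with block outside the class:

class Config:
    def __enter__(self):
        print("entering")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("exiting")
        return False

    def load(self):
        # __enter__ runs before the body, __exit__ runs when the block is left
        with self:
            self.x = 4
            self.y = 6

Config().load()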
Related
I am having a deeper look at the Session class inside the SQLAlchemy library, in sqlalchemy.orm.session.py (link here), and I see this block inside the Session class at line 1170:
@util.contextmanager
def _maker_context_manager(self):
    with self:
        with self.begin():
            yield self
I don't understand the syntax and what it does. Why is there a with self: at the start? Can we use with with any class? Can someone please explain this and how it is useful in the context of the SQLAlchemy Session?
with self: invokes the instance's __enter__ and __exit__ methods; those methods are what make it a context manager. Here's an excerpt from Python's docs on context managers:
Python’s with statement supports the concept of a runtime context defined by a context manager. This is implemented using a pair of methods that allow user-defined classes to define a runtime context that is entered before the statement body is executed and exited when the statement ends:
contextmanager.__enter__(): Enter the runtime context and return either this object or another object related to the runtime context [...]
contextmanager.__exit__(exc_type, exc_val, exc_tb): Exit the runtime context [...].
From Session (here):
def __enter__(self: _S) -> _S:
    return self

def __exit__(self, type_: Any, value: Any, traceback: Any) -> None:
    self.close()
Here, __enter__ returns the instance of the class, and __exit__ handles graceful termination by closing the session when you leave the scope. The Session therefore acts as its own context manager.
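To see how the two pieces fit together in _maker_context_manager, here is a simplified stand-in (this is not SQLAlchemy's actual code; FakeSession and its print calls are purely illustrative):

from contextlib import contextmanager

class FakeSession:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    @contextmanager
    def begin(self):
        print("BEGIN")
        try:
            yield self
            print("COMMIT")
        except Exception:
            print("ROLLBACK")
            raise

    def close(self):
        print("session closed")

    @contextmanager
    def _maker_context_manager(self):
        # the outer "with self:" guarantees close(); the inner begin()
        # manages the transaction, mirroring the block in the question
        with self:
            with self.begin():
                yield self

with FakeSession()._maker_context_manager() as s:
    print("doing work with", s)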
From Learning Python:
The basic format of the with statement looks like this, with an
optional part in square brackets here:
with expression [as variable]:
    with-block
The expression here is assumed to return an object that supports the
context management protocol (more on this protocol in a moment).
This object may also return a value that will be assigned to the name variable if the optional as clause is present.
Note that the variable is not necessarily assigned the result of
the expression; the result of the expression is the object that
supports the context protocol, and the variable may be assigned
something else intended to be used inside the statement.
expression is evaluated to a context manager object.
What is assigned to variable? The quote only says that it is not necessarily the context manager object itself.
Does the assignment to variable call some method of a context manager class to produce the actual value assigned to variable?
Thanks.
Whatever is returned from __enter__. From the documentation on the __enter__ method of context managers:
contextmanager.__enter__()
Enter the runtime context and return either this object or another object related to the runtime context. The value returned by this method is bound to the identifier in the as clause of with statements using this context manager.
(Emphasis mine)
The result of calling __enter__ could very well be a context manager; nothing in the specification forbids this. It could, of course, also be another object related to the runtime context, as the docs state.
Objects that return themselves from __enter__ can be used again and again as context managers. file objects, for example:
with open('test_file') as f1: # file.__enter__ returns self
with f1 as f2: # use it again, get __self__ back
print("Super context managing")
with f2 as f3, f1 as f4: # getting weird.
print("This can go on since f1.__enter__ returns f1")
print("f1.__exit__ has been called here, though :)")
print("f1 closed: {}".format(f1.closed))
Not that the previous example makes much sense; it's just to make the point clear.
Your object can function as a context manager if it provides both __enter__ and __exit__. The object returned by __enter__ is bound to the object you specify in the as part of the with statement:
In [1]: class Foo:
   ...:     def __enter__(self):
   ...:         return 'hello'
   ...:     def __exit__(self, *args):
   ...:         pass
   ...:

In [2]: with Foo() as a:
   ...:     print(a)
   ...:
hello
I have a class that needs to run a TensorFlow session for each instance of the class, as long as that instance exists.
TensorFlow sessions use context managers, but I don't want to force anyone who uses my class to put my class into a context manager.
Is there any way to auto-close the session once the instance is no longer in use without using a context manager?
Can I just put in an __exit__ method without an __enter__ method, start the session outside of a context manager, and simply close the session in __exit__?
Is there any way to auto-close the session once the instance is no longer in use without using a context manager?
Not really: how would an object figure out when it's no longer being used? If there were a safe way to do this, there wouldn't be a need for context managers in the first place.
So you have to use context managers and the with statement to get this kind of feedback. But just because you have to use context managers, that does not mean that you actually need to have some separate “thing” you open. You can return anything in the __enter__ method, including the current object.
So the simplest context manager implementation that closes itself when the context is closed looks like this:
class MyClass:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def close(self):
        # actually close the object
        pass
In fact, this pattern is so common, that there is a built-in recipe for this context manager: contextlib.closing. Using that, you do not actually need to modify your class at all, you can just wrap it in a closing() call and have it call close when the context is exited:
from contextlib import closing

with closing(my_object):
    my_object.do_something()
# my_object.close() is automatically called
You must define an __enter__ method, but you can just define it as:
def __enter__(self):
    return self
and have the session created in __init__. Then, define __exit__ like so:
def __exit__(self, *exc):
    self.close()
Then, define a close method that closes whatever resources were opened in __init__. (In my case, it's a TensorFlow session.)
This way, if the user decides to use the context manager, it will close it for them, and if they don't, they'll have to close it on their own.
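Putting those pieces together, here is a hedged sketch of the approach (a throwaway temporary file stands in for the TensorFlow session, and the ModelRunner name is invented):

import tempfile

class ModelRunner:
    def __init__(self):
        # the resource is opened in __init__, so the object also works
        # without a with statement
        self._resource = tempfile.TemporaryFile()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def close(self):
        self._resource.close()

# used as a context manager, close() is called automatically...
with ModelRunner() as runner:
    pass

# ...otherwise the caller has to close it explicitly
runner = ModelRunner()
try:
    pass  # use runner
finally:
    runner.close()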
If I have a class that wraps a resource, e.g. an sqlite database connection or a file, is there a way I can use the with statement to close the resource when my object goes out of scope or is garbage-collected?
To clarify what I mean, I want to avoid this:
class x:
    def __init__(self):
        # open resource
        ...

    def close(self):  # or __del__, even worse
        # close resource
        ...
but make it in such a way that the resource is always freed as in
with open('foo') as f:
    ...  # use resource
You need to provide __enter__ and __exit__ methods. See PEP 343.
This PEP adds a new statement "with" to the Python language to make it
possible to factor out standard uses of try/finally statements.
In this PEP, context managers provide __enter__() and __exit__()
methods that are invoked on entry to and exit from the body of the
with statement.
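For example, a sketch of such a wrapper, assuming the wrapped resource is an sqlite connection (the Database name is made up for illustration):

import sqlite3

class Database:
    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def __enter__(self):
        # entering the with block hands back the wrapper itself
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # leaving the with block always closes the connection
        self.conn.close()

with Database(':memory:') as db:
    db.conn.execute('CREATE TABLE t (x INTEGER)')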
Use contextlib.closing:
import contextlib

with contextlib.closing(thing) as thing:
    do_stuff_with(thing)
# thing is closed now.
You can always put any cleanup code you need into a class's __del__ method:
class x:
    def __init__(self):
        self.thing = get_thing()

    def __del__(self):
        self.thing.close()
But you shouldn't.
This is a bad idea, for a few reasons. If you're using CPython (before 3.4), having custom __del__ methods means the GC can't break reference cycles. If you're using most other Python implementations, __del__ methods aren't called at predictable times.
This is why you usually put cleanup in explicit close methods. That's the best you can do within the class itself. It's always up to the user of your class to make sure the close method gets called, not the class itself.
So, there's no way you can use a with statement, or anything equivalent, inside your class. But you can make it easier for users of your class to use a with statement, by making your class into a context manager, as described in roippi's answer, or just by suggesting they use contextlib.closing in your documentation.
class Foo(object):
    pass

foo = Foo()

def bar(self):
    print('bar')

Foo.bar = bar
foo.bar()  # bar
Coming from JavaScript, I know that if a "class" prototype is augmented with a certain attribute, all instances of that "class" will have that attribute in their prototype chain, so no modification has to be made to any of its instances or "sub-classes".
In that sense, how can a class-based language like Python achieve monkey patching?
The real question is, how can it not? In Python, classes are first-class objects in their own right. Attribute access on instances of a class is resolved by looking up attributes on the instance, and then the class, and then the parent classes (in the method resolution order.) These lookups are all done at runtime (as is everything in Python.) If you add an attribute to a class after you create an instance, the instance will still "see" the new attribute, simply because nothing prevents it.
In other words, it works because Python doesn't cache attributes (unless your code does), because it doesn't use negative caching or shadowclasses or any of the optimization techniques that would inhibit it (or, when Python implementations do, they take into account the class might change) and because everything is runtime.
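A small demonstration of that runtime lookup (reusing Foo from the question, plus an invented SubFoo subclass): an attribute added to the class afterwards is visible to existing instances and to subclasses alike:

class Foo(object):
    pass

class SubFoo(Foo):
    pass

foo = Foo()
sub = SubFoo()

Foo.bar = lambda self: 'bar'

print(foo.bar())  # 'bar' -- the existing instance sees the new method
print(sub.bar())  # 'bar' -- so does the subclass, via the MRO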
I just read through a bunch of documentation, and as far as I can tell, the whole story of how foo.bar is resolved, is as follows:
Can we find foo.__getattribute__ by the following process? If so, use the result of foo.__getattribute__('bar').
(Looking up __getattribute__ will not cause infinite recursion, but the implementation of it might.)
(In reality, we will always find __getattribute__ in new-style objects, as a default implementation is provided in object - but that implementation is of the following process. ;) )
(If we define a __getattribute__ method in Foo, and access foo.__getattribute__, foo.__getattribute__('__getattribute__') will be called! But this does not imply infinite recursion - if you are careful ;) )
Is bar a "special" name for an attribute provided by the Python runtime (e.g. __dict__, __class__, __bases__, __mro__)? If so, use that. (As far as I can tell, __getattribute__ falls into this category, which avoids infinite recursion.)
Is bar in the foo.__dict__ dict? If so, use foo.__dict__['bar'].
Does foo.__mro__ exist (i.e., is foo actually a class)? If so,
For each base-class base in foo.__mro__[1:]:
(Note that the first one will be foo itself, which we already searched.)
Is bar in base.__dict__? If so:
Let x be base.__dict__['bar'].
Can we find (again, recursively, but it won't cause a problem) x.__get__?
If so, use x.__get__(foo, foo.__class__).
(Note that the function bar is, itself, an object, and the Python compiler automatically gives functions a __get__ attribute which is designed to be used this way.)
Otherwise, use x.
For each base-class base of foo.__class__.__mro__:
(Note that this recursion is not a problem: those attributes should always exist, and fall into the "provided by the Python runtime" case. foo.__class__.__mro__[0] will always be foo.__class__, i.e. Foo in our example.)
(Note that we do this even if foo.__mro__ exists. This is because classes have a class, too: its name is type, and it provides, among other things, the method used to calculate __mro__ attributes in the first place.)
Is bar in base.__dict__? If so:
Let x be base.__dict__['bar'].
Can we find (again, recursively, but it won't cause a problem) x.__get__?
If so, use x.__get__(foo, foo.__class__).
(Note that the function bar is, itself, an object, and the Python compiler automatically gives functions a __get__ attribute which is designed to be used this way.)
Otherwise, use x.
If we still haven't found something to use: can we find foo.__getattr__ by the preceding process? If so, use the result of foo.__getattr__('bar').
If everything failed, raise AttributeError.
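As a quick illustration of that order (Base and Foo below are invented names): the instance __dict__ is consulted first, the class and its bases are then searched via the MRO, and __getattr__ only runs as a last resort:

class Base(object):
    bar = 'from Base.__dict__'

class Foo(Base):
    def __getattr__(self, name):
        return 'from __getattr__ fallback'

foo = Foo()
print(foo.bar)      # 'from Base.__dict__' -- found on a base class via the MRO

foo.bar = 'from foo.__dict__'
print(foo.bar)      # the instance dict now shadows the class attribute

print(foo.missing)  # only now does __getattr__ get called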
bar.__get__ is not really a function - it's a "method-wrapper" - but you can imagine it being implemented vaguely like this:
# Somewhere in the Python internals
class __method_wrapper(object):
    def __init__(self, func):
        self.func = func

    def __call__(self, obj, cls):
        return lambda *args, **kwargs: self.func(obj, *args, **kwargs)
        # Except it actually returns a "bound method" object
        # that uses cls for its __repr__,
        # and there is a __repr__ for the method_wrapper that I *think*
        # uses the hashcode of the underlying function, rather than of itself,
        # but I'm not sure.

# Automatically done after compiling bar
bar.__get__ = __method_wrapper(bar)
The "binding" that happens within the __get__ automatically attached to bar (called a descriptor), by the way, is more or less the reason why you have to specify self parameters explicitly for Python methods. In Javascript, this itself is magical; in Python, it is merely the process of binding things to self that is magical. ;)
And yes, you can explicitly define a __get__ method on your own objects and have it do special things when an instance of your class is set as a class attribute on some other class and then accessed from an instance of that other class. Python is extremely reflective. :) But if you want to learn how to do that, and get a really full understanding of the situation, you have a lot of reading to do. ;)
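As a tiny hand-rolled illustration of that (Doubler and Thing below are invented names): a class whose instances define __get__ can be assigned as a class attribute on another class, and attribute access then runs __get__ instead of handing back the object:

class Doubler(object):
    def __init__(self, value):
        self.value = value

    def __get__(self, obj, objtype=None):
        # called when a Doubler instance is looked up as a class attribute
        return self.value * 2

class Thing(object):
    size = Doubler(21)

print(Thing().size)  # 42 -- Doubler.__get__ ran instead of returning the Doubler
print(Thing.size)    # 42 again; for class access, obj is None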