ExitStack within classes - Python

I would like to understand why using the following snippet leads me to an error:
a) I want to use the following class to create a context manager, as outlined in the link attached below. For me it is very important to keep the "class PrintStop(ExitStack)" form, so please bear in mind when trying to solve this issue that I already know there are other ways to use ExitStack(); I am interested in this specific way of using it:
import os
import sys
from contextlib import ExitStack

class PrintStop(ExitStack):
    def __init__(self, verbose: bool = False):
        super().__init__()
        self.verbose = verbose

    def __enter__(self):
        super().__enter__()
        if not self.verbose:
            sys.stdout = self.enter_context(open(os.devnull, 'w'))
b) when using the class in the intended way, I get the desired effect of stopping all printing within the "with" block, but when trying to print again after that block I get an error:
with PrintStop(verbose=False):
    print("this shouldn't be printed")   # <------ OK till here
print('this should be printed again as it is outside the with block')   # <----- ERROR
c) the error I get is "ValueError: I/O operation on closed file". The reason, I guess, is that the __exit__ method of ExitStack() is not automatically called once we exit the 'with' block. How may I change the class to fix this bug?
Here is a quick reference to a similar topic:
Pythonic way to compose context managers for objects owned by a class

ExitStack.__exit__ simply ensures that each context you enter has its __exit__ method called; it does not ensure that any changes you made inside the corresponding __enter__ (like assigning to sys.stdout) are undone.
Also, the purpose of an exit stack is to make it easy to enter contexts that require information not known when the with statement is introduced, or to create a variable number of contexts without having to enumerate them statically.
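For example, a minimal sketch of that use case (the filenames here are hypothetical):
from contextlib import ExitStack

filenames = ['a.txt', 'b.txt', 'c.txt']  # hypothetical input files
with ExitStack() as stack:
    files = [stack.enter_context(open(name)) for name in filenames]
    # all files are open here; each one's __exit__ runs, in reverse
    # order, when the with block ends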
If you really want to use an exit stack, you'll need something like
class PrintStop(ExitStack):
    def __init__(self, verbose: bool = False):
        super().__init__()
        self.verbose = verbose

    def __enter__(self):
        rv = super().__enter__()
        if not self.verbose:
            sys.stdout = self.enter_context(open(os.devnull, 'w'))
        return rv

    def __exit__(self, *exc_details):
        sys.stdout = sys.__stdout__  # Restore the original
        return super().__exit__(*exc_details)
(Note that __exit__ has to accept the exception details and pass them along to the parent class.)
Keep in mind that contextlib already provides a context manager for temporarily replacing standard output with a different file, appropriately named redirect_stdout.
with redirect_stdout(open(os.devnull, 'w')):
    ...
Using this as the basis for PrintStop makes use of composition, rather than inheritance.
from contextlib import redirect_stdout, nullcontext

class PrintStop:
    def __init__(self, verbose: bool = False):
        if not verbose:
            self.cm = redirect_stdout(open(os.devnull, 'w'))
        else:
            self.cm = nullcontext()

    def __enter__(self):
        return self.cm.__enter__()

    def __exit__(self, *exc_details):
        return self.cm.__exit__(*exc_details)
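Either version can then be used the way the question intended; a quick usage sketch:
with PrintStop():                     # verbose=False by default
    print('this is swallowed')
print('this is printed normally')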

Related

Returning "self"? what does it really do and when do we need to return self

I encountered code which looks similar to this:
from contextlib import contextmanager, ContextDecorator

class makepara(ContextDecorator):
    def __enter__(self):
        print("<p>")
        return self

    def __exit__(self, *args):
        print("</p>")
        return False

@makepara()
def emit_data():
    print(" here is HTML code")

emit_data()
I found a related answer (this), but when I change the above code to
from contextlib import contextmanager, ContextDecorator

class makepara(ContextDecorator):
    def __enter__(self):
        print("<p>")

    def __exit__(self, *args):
        print("</p>")

@makepara()
def emit_data():
    print(" here is HTML code")

emit_data()
there is no change in the output, which makes me wonder: what does return self actually do, and how should it be used?
You choose to return self (or some other object, but usually the context manager instance itself) so that a name can be bound with this syntax:
with makepara() as var:
    ...
The object returned by __enter__ will be bound to the name var within the context (and will, in fact, remain bound to var after exiting the context).
If you don't need any value bound after entering the context, it is possible to omit an explicit return (the implicit return of None is used in that case), but there is no harm and no disadvantage in returning self anyway.
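A minimal illustration of the difference:
class NoReturn:
    def __enter__(self):
        pass          # implicit return None

    def __exit__(self, *exc):
        return False

class ReturnsSelf:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

with NoReturn() as a, ReturnsSelf() as b:
    print(a)   # None
    print(b)   # <__main__.ReturnsSelf object at 0x...>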
return self is not only useful in with statements, but also in many other situations.
For example, when you open a file using:
with open("file") as f:
....
The open function actually returns an object which implements __enter__, and in its __enter__ it uses return self so that you can bind the instance to the variable f and call f.read() or something else afterwards.
For another example, if you want to chain calls (maybe data = a.connect().get("key").to_dict()), you need to add return self to connect and get; see the sketch below.
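For instance, a hypothetical fluent API (the names are illustrative only) might look like this:
class Client:
    def connect(self):
        self._data = {'key': 'value'}   # pretend we connected somewhere
        return self                     # returning self enables chaining

    def get(self, key):
        self._value = self._data[key]
        return self

    def to_dict(self):
        return {'value': self._value}

data = Client().connect().get('key').to_dict()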
But after all, return self is nothing more than returning a normal variable.

Using callbacks to run a function using the current values in a class

I struggled to think of a good title, so I'll just explain it here. I'm using Python in Maya, which has some event callback options, so you can do something like "on save: run function". I have a user interface class which I'd like to update when certain events are triggered; I can do that, but I'm looking for a cleaner way of doing it.
Here is a basic example similar to what I have:
class test(object):
    def __init__(self, x=0):
        self.x = x

    def run_this(self):
        print self.x

    def display(self):
        print 'load user interface'
# Here's the main stuff that used to be just 'test().display()'
try:
    callbacks = [callback1, callback2, ...]
except NameError:
    pass
else:
    for i in callbacks:
        try:
            OpenMaya.MEventMessage.removeCallback(i)
        except RuntimeError:
            pass

ui = test(5)
callback1 = OpenMaya.MEventMessage.addEventCallback('SomeEvent', ui.run_this)
callback2 = OpenMaya.MEventMessage.addEventCallback('SomeOtherEvent', ui.run_this)
callback3 = ......
ui.display()
The callback persists until Maya is restarted, but you can remove it using removeCallback if you pass it the value that was returned from addEventCallback. The way I have it currently just checks whether the variable is set before setting it, which is a lot messier than the previous one-liner test().display().
Is there a way I can do this neatly in the function? Something that would delete the old callback if I ran the test class again, or something similar?
There are two ways you might want to try this.
You can have a persistent object which represents your callback manager, and allow it to hook and unhook itself.
import maya.api.OpenMaya as om
import maya.cmds as cmds

om.MEventMessage.getEventNames()

class CallbackHandler(object):
    def __init__(self, cb, fn):
        self.callback = cb
        self.function = fn
        self.id = None

    def install(self):
        if self.id:
            print "callback is currently installed"
            return False
        self.id = om.MEventMessage.addEventCallback(self.callback, self.function)
        return True

    def uninstall(self):
        if self.id:
            om.MEventMessage.removeCallback(self.id)
            self.id = None
            return True
        else:
            print "callback not currently installed"
            return False

    def __del__(self):
        self.uninstall()

def test_fn(arg):
    print "callback fired 2", arg

cb = CallbackHandler('NameChanged', test_fn)
cb.install()
# callback is active
cb.uninstall()
# callback not active
cb.install()
# callback on again
del(cb)  # or cb = None
# callback gone again
In this version you'd store the CallbackHandlers you create for as long as you want the callback to persist and then manually uninstall them or let them fall out of scope when you don't need them any more.
Another option would be to create your own object to represent the callbacks, and then add or remove any functions you want it to trigger in your own code. This keeps the management entirely on your side instead of relying on the API, which could be good or bad depending on your needs. You'd have an Event() class which is callable (implementing __call__()) and which keeps a list of functions to fire when its __call__() is invoked by Maya, along the lines of the sketch below. There's an example of the kind of event handler object you'd want here.
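A minimal sketch of that kind of event object (the names here are illustrative, not from any Maya API):
class Event(object):
    def __init__(self):
        self._handlers = []

    def add(self, fn):
        self._handlers.append(fn)

    def remove(self, fn):
        self._handlers.remove(fn)

    def __call__(self, *args, **kwargs):
        # Maya invokes this single callable; it fans out to your handlers
        for fn in list(self._handlers):
            fn(*args, **kwargs)

# Register the Event once, then add/remove handlers freely, e.g.:
# ui_event = Event()
# cb_id = om.MEventMessage.addEventCallback('SomeEvent', ui_event)
# ui_event.add(ui.run_this)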

Subclassing file objects (to extend open and close operations) in Python 3

Suppose I want to extend the built-in file abstraction with extra operations at open and close time. In Python 2.7 this works:
class ExtFile(file):
    def __init__(self, *args):
        file.__init__(self, *args)
        # extra stuff here

    def close(self):
        file.close(self)
        # extra stuff here
Now I'm looking at updating the program to Python 3, in which open is a factory function that might return an instance of any of several different classes from the io module depending on how it's called. I could in principle subclass all of them, but that's tedious, and I'd have to reimplement the dispatching that open does. (In Python 3 the distinction between binary and text files matters rather more than it does in 2.x, and I need both.) These objects are going to be passed to library code that might do just about anything with them, so the idiom of making a "file-like" duck-typed class that wraps the return value of open and forwards the necessary methods would be quite verbose.
Can anyone suggest a 3.x approach that involves as little additional boilerplate as possible beyond the 2.x code shown?
You could just use a context manager instead. For example this one:
class SpecialFileOpener:
    def __init__(self, fileName, someOtherParameter):
        self.f = open(fileName)
        # do more stuff
        print(someOtherParameter)

    def __enter__(self):
        return self.f

    def __exit__(self, exc_type, exc_value, traceback):
        self.f.close()
        # do more stuff
        print('Everything is over.')
Then you can use it like this:
>>> with SpecialFileOpener('C:\\test.txt', 'Hello world!') as f:
...     print(f.read())
Hello world!
foo bar
Everything is over.
Using a context block with with is preferred for file objects (and other resources) anyway.
tl;dr Use a context manager. See the bottom of this answer for important cautions about them.
Files got more complicated in Python 3. While there are some methods that can be used on normal user classes, those methods don't work with built-in classes. One way is to mix in a desired class before instantiating it, but this requires knowing what the mix-in class should be first:
class MyFileType(???):
    def __init__(...):
        # stuff here
    def close(self):
        # more stuff here
Because there are so many types, and more could possibly be added in the future (unlikely, but possible), and we don't know for sure which will be returned until after the call to open, this method doesn't work.
Another method is to change our custom type so that it has the returned file's class in its __bases__, and to modify the returned instance's __class__ attribute to our custom type:
class MyFileType:
    def close(self):
        # stuff here

some_file = open(path_to_file, '...')  # ... = desired options
MyFileType.__bases__ = (some_file.__class__,) + MyFileType.__bases__
but this yields
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __bases__ assignment: '_io.TextIOWrapper' deallocator differs from 'object'
Yet another method that could work with pure user classes is to create the custom file type on the fly, directly from the returned instance's class, and then update the returned instance's class:
some_file = open(path_to_file, '...')  # ... = desired options

class MyFile(some_file.__class__):
    def close(self):
        super().close()
        print("that's all, folks!")

some_file.__class__ = MyFile
but again:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __class__ assignment: only for heap types
So, it looks like the best method that will work at all in Python 3, and luckily will also work in Python 2 (useful if you want the same code base to work on both versions) is to have a custom context manager:
class Open(object):
    def __init__(self, *args, **kwds):
        # do custom stuff here
        self.args = args
        self.kwds = kwds

    def __enter__(self):
        # or do custom stuff here :)
        self.file_obj = open(*self.args, **self.kwds)
        # return the actual file object so we don't have to worry
        # about proxying
        return self.file_obj

    def __exit__(self, *args):
        # and still more custom stuff here
        self.file_obj.close()
        # or here
and to use it:
with Open('some_file') as data:
    # custom stuff just happened
    for line in data:
        print(line)
# data is now closed, and more custom stuff
# just happened
An important point to keep in mind: any unhandled exception in __init__ or __enter__ will prevent __exit__ from running, so in those two locations you still need to use the try/except and/or try/finally idioms to make sure you don't leak resources.
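For example, one way to guard the __enter__ of the Open class above (a sketch; the setup step is a placeholder):
    def __enter__(self):
        self.file_obj = open(*self.args, **self.kwds)
        try:
            pass  # custom setup that might raise goes here
        except Exception:
            self.file_obj.close()  # don't leak the handle if setup fails
            raise
        return self.file_obj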
I had a similar problem, and a requirement of supporting both Python 2.x and 3.x. What I did was similar to the following (current full version):
class _file_obj(object):
    """Check if `f` is a file name and open the file in `mode`.
    A context manager."""
    def __init__(self, f, mode):
        if isinstance(f, str):
            self.file = open(f, mode)
        else:
            self.file = f
        self.close_file = (self.file is not f)

    def __enter__(self):
        return self

    def __exit__(self, *args, **kwargs):
        if (not self.close_file):
            return  # do nothing
        # clean up
        exit = getattr(self.file, '__exit__', None)
        if exit is not None:
            return exit(*args, **kwargs)
        else:
            exit = getattr(self.file, 'close', None)
            if exit is not None:
                exit()

    def __getattr__(self, attr):
        return getattr(self.file, attr)

    def __iter__(self):
        return iter(self.file)
It passes all calls through to the underlying file object and can be initialized from an open file or from a filename. It also works as a context manager. Inspired by this answer.
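Usage is then uniform regardless of what you were handed; a quick sketch (the filename is hypothetical):
# Starting from a path: the file is opened and closed for you.
with _file_obj('data.txt', 'r') as f:
    print(f.read())   # f proxies the underlying file via __getattr__

# Starting from an already-open file: it is left open on exit.
already_open = open('data.txt', 'r')
with _file_obj(already_open, 'r') as f:
    print(f.read())
already_open.close()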

Is a context manager right for this job?

The code pasted below does the following:
creates an import hook
creates a context manager which sets the meta_path and cleans on exit.
dumps all the imports done by a program passed as input to imports.log
Now I was wondering if using a context manager is a good idea in this case, because I don't actually have the standard try/finally flow, but just a set-up and a clean-up.
Another thing, regarding this line:
with CollectorContext(cl, sys.argv, 'imports.log') as cc:
does cc become None? Shouldn't it be a CollectorContext object?
from __future__ import with_statement
import os
import sys

class CollectImports(object):
    """
    Import hook, adds each import request to the loaded set and dumps
    them to file
    """
    def __init__(self):
        self.loaded = set()

    def __str__(self):
        return str(self.loaded)

    def dump_to_file(self, fname):
        """Dump the loaded set to file
        """
        dumped_str = '\n'.join(x for x in self.loaded)
        open(fname, 'w').write(dumped_str)

    def find_module(self, module_name, package=None):
        self.loaded.add(module_name)

class CollectorContext(object):
    """Sets the meta_path hook with the passed import hook when
    entering and clean up when exiting
    """
    def __init__(self, collector, argv, output_file):
        self.collector = collector
        self.argv = argv
        self.output_file = output_file

    def __enter__(self):
        self.argv = self.argv[1:]
        sys.meta_path.append(self.collector)

    def __exit__(self, type, value, traceback):
        # TODO: should assert that the variables are None, otherwise
        # we are quitting with some exceptions
        self.collector.dump_to_file(self.output_file)
        sys.meta_path.remove(self.collector)

def main_context():
    cl = CollectImports()
    with CollectorContext(cl, sys.argv, 'imports.log') as cc:
        progname = sys.argv[0]
        code = compile(open(progname).read(), progname, 'exec')
        exec(code)

if __name__ == '__main__':
    sys.argv = sys.argv[1:]
    main_context()
I think this concept is OK. Likewise, I don't see any reason against having the clean-up stuff in a finally: clause, so the context manager fits perfectly.
Your cc is None, because you told it to be so.
If you don't want that, change your __enter__ method to return something else:
The value returned by this method is bound to the identifier in the as clause of with statements using this context manager.
def __enter__(self):
    self.argv = self.argv[1:]
    sys.meta_path.append(self.collector)
    return self
    # or
    return self.collector
    # or
    return "I don't know what to return here"
and then
with CollectorContext(cl, sys.argv, 'imports.log') as cc:
    print cc, repr(cc)  # there you see what happens.
    progname = sys.argv[0]
    code = compile(open(progname).read(), progname, 'exec')
    exec(code)
If you always want the cleanup to occur, you should use a context manager. I'm not sure where you would use try..finally if you implement the context manager using the low-level special methods. If you use the @contextmanager decorator, you code the context manager in a "natural" way, so that's where you use try..finally instead of getting the exception as a parameter.
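For illustration, a generator-based version of the question's context manager might look like this (a sketch reusing the question's names):
import sys
from contextlib import contextmanager

@contextmanager
def collector_context(collector, output_file):
    sys.meta_path.append(collector)
    try:
        yield collector  # bound to the `as` target
    finally:
        collector.dump_to_file(output_file)
        sys.meta_path.remove(collector)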
Also, cc will be the value you return from __enter__(). In your case, None. The way I understand the context manager design is that the return value is the "context". What the context manager does is set up and clean up contexts in which something else happens. E.g. a database connection will create transactions, and database operations happen in the scope of those transactions.
That said, the above is just there to provide maximum flexibility. There's nothing wrong with just creating a context (that manages itself) directly and returning self, or even not returning anything if you don't need to use the context value inside the with. Since you don't use cc anywhere, you could just do this and not worry about the return value:
with CollectorContext(cl, sys.argv, 'imports.log'):
    progname = sys.argv[0]
    code = compile(open(progname).read(), progname, 'exec')
    exec(code)
Thanks everyone, now it works smoothly. I actually wanted with to return something because I wanted to encapsulate the "run" inside the context manager, so I end up with something like the code below.
Moreover, I now store the old sys.argv and restore it on exit; probably not fundamental, but still a nice thing to do, I think.
class CollectorContext(object):
    """Sets the meta_path hook with the passed import hook when
    entering and clean up when exiting
    """
    def __init__(self, collector, argv, output_file):
        self.collector = collector
        self.old_argv = argv[:]
        self.output_file = output_file
        self.progname = self.old_argv[1]

    def __enter__(self):
        sys.argv = self.old_argv[1:]
        sys.meta_path.append(self.collector)
        return self

    def __exit__(self, type, value, traceback):
        # TODO: should assert that the variables are None, otherwise
        # we are quitting with some exceptions
        self.collector.dump_to_file(self.output_file)
        sys.meta_path.remove(self.collector)
        sys.argv = self.old_argv[:]

    def run(self):
        code = compile(open(self.progname).read(), self.progname, 'exec')
        exec(code)

def main_context():
    cl = CollectImports()
    with CollectorContext(cl, sys.argv, 'imports.log') as cc:
        cc.run()

In Python, is there a good idiom for using context managers in setup/teardown

I am finding that I am using plenty of context managers in Python. However, I have been testing a number of things using them, and I often need the following:
class MyTestCase(unittest.TestCase):
    def testFirstThing(self):
        with GetResource() as resource:
            u = UnderTest(resource)
            u.doStuff()
            self.assertEqual(u.getSomething(), 'a value')

    def testSecondThing(self):
        with GetResource() as resource:
            u = UnderTest(resource)
            u.doOtherStuff()
            self.assertEqual(u.getSomething(), 'a value')
When this gets to many tests, this is clearly going to get boring, so in the spirit of SPOT/DRY (single point of truth / don't repeat yourself), I'd want to refactor those bits into the test setUp() and tearDown() methods.
However, trying to do that has led to this ugliness:
def setUp(self):
    self._resource = GetSlot()
    self._resource.__enter__()

def tearDown(self):
    self._resource.__exit__(None, None, None)
There must be a better way to do this. Ideally, in the setUp()/tearDown() without repetitive bits for each test method (I can see how repeating a decorator on each method could do it).
Edit: Consider the undertest object to be internal, and the GetResource object to be a third party thing (which we aren't changing).
I've renamed GetSlot to GetResource here; this is more general than the specific case, where context managers are the way the object is intended to go into a locked state and out of it.
How about overriding unittest.TestCase.run() as illustrated below? This approach doesn't require calling any private methods or doing something to every method, which is what the questioner wanted.
from contextlib import contextmanager
import unittest

@contextmanager
def resource_manager():
    yield 'foo'

class MyTest(unittest.TestCase):
    def run(self, result=None):
        with resource_manager() as resource:
            self.resource = resource
            super(MyTest, self).run(result)

    def test(self):
        self.assertEqual('foo', self.resource)

unittest.main()
This approach also allows passing the TestCase instance to the context manager, if you want to modify the TestCase instance there.
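For instance, a sketch reusing the names above (the per-test resource string is hypothetical):
@contextmanager
def resource_manager(test_case):
    yield 'foo for %s' % test_case.id()  # hypothetical per-test resource

class MyTest(unittest.TestCase):
    def run(self, result=None):
        with resource_manager(self) as resource:
            self.resource = resource
            super(MyTest, self).run(result)

    def test(self):
        self.assertTrue(self.resource.startswith('foo'))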
Manipulating context managers in situations where you don't want a with statement to clean things up if all your resource acquisitions succeed is one of the use cases that contextlib.ExitStack() is designed to handle.
For example (using addCleanup() rather than a custom tearDown() implementation):
def setUp(self):
    with contextlib.ExitStack() as stack:
        self._resource = stack.enter_context(GetResource())
        self.addCleanup(stack.pop_all().close)
That's the most robust approach, since it correctly handles acquisition of multiple resources:
def setUp(self):
    with contextlib.ExitStack() as stack:
        self._resource1 = stack.enter_context(GetResource())
        self._resource2 = stack.enter_context(GetOtherResource())
        self.addCleanup(stack.pop_all().close)
Here, if GetOtherResource() fails, the first resource will be cleaned up immediately by the with statement, while if it succeeds, the pop_all() call will postpone the cleanup until the registered cleanup function runs.
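A minimal sketch of the pop_all() hand-off described above:
import contextlib

with contextlib.ExitStack() as stack:
    stack.callback(print, 'cleaned up')  # stands in for a real resource
    saved = stack.pop_all()              # transfer cleanup to a new stack
# nothing has printed yet: the with statement closed an empty stack
saved.close()                            # now prints 'cleaned up'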
If you know you're only ever going to have one resource to manage, you can skip the with statement:
def setUp(self):
    stack = contextlib.ExitStack()
    self._resource = stack.enter_context(GetResource())
    self.addCleanup(stack.close)
However, that's a bit more error prone, since if you add more resources to the stack without first switching to the with statement based version, successfully allocated resources may not get cleaned up promptly if later resource acquisitions fail.
You can also write something comparable using a custom tearDown() implementation by saving a reference to the resource stack on the test case:
def setUp(self):
    with contextlib.ExitStack() as stack:
        self._resource1 = stack.enter_context(GetResource())
        self._resource2 = stack.enter_context(GetOtherResource())
        self._resource_stack = stack.pop_all()

def tearDown(self):
    self._resource_stack.close()
Alternatively, you can also define a custom cleanup function that accesses the resource via a closure reference, avoiding the need to store any extra state on the test case purely for cleanup purposes:
def setUp(self):
    with contextlib.ExitStack() as stack:
        resource = stack.enter_context(GetResource())
        saved_stack = stack.pop_all()  # postpone cleanup past the with block
        def cleanup():
            if necessary:
                one_last_chance_to_use(resource)
            saved_stack.close()
        self.addCleanup(cleanup)
pytest fixtures are very close to your idea/style, and allow for exactly what you want:
import pytest
from code.to.test import foo

@pytest.fixture(...)
def resource():
    with your_context_manager as r:
        yield r

def test_foo(resource):
    assert foo(resource).bar() == 42
The problem with calling __enter__ and __exit__ as you did, is not that you have done so: they can be called outside of a with statement. The problem is that your code has no provision to call the object's __exit__ method properly if an exception occurs.
So, the way to do it is to have a decorator that will wrap the call to your original method in a with statement. A short metaclass can apply the decorator transparently to all methods named test* in the class:
# -*- coding: utf-8 -*-
from functools import wraps
import unittest

def setup_context(method):
    # the 'wraps' decorator preserves the original function name;
    # otherwise unittest would not call it, as its name
    # would not start with 'test'
    @wraps(method)
    def test_wrapper(self, *args, **kw):
        with GetSlot() as slot:
            self._slot = slot
            result = method(self, *args, **kw)
        delattr(self, "_slot")
        return result
    return test_wrapper

class MetaContext(type):
    def __new__(mcs, name, bases, dct):
        for key, value in dct.items():
            if key.startswith("test"):
                dct[key] = setup_context(value)
        return type.__new__(mcs, name, bases, dct)

class GetSlot(object):
    def __enter__(self):
        return self

    def __exit__(self, *args, **kw):
        print "exiting object"

    def doStuff(self):
        print "doing stuff"

    def doOtherStuff(self):
        raise ValueError

    def getSomething(self):
        return "a value"

def UnderTest(*args):
    return args[0]

class MyTestCase(unittest.TestCase):
    __metaclass__ = MetaContext

    def testFirstThing(self):
        u = UnderTest(self._slot)
        u.doStuff()
        self.assertEqual(u.getSomething(), 'a value')

    def testSecondThing(self):
        u = UnderTest(self._slot)
        u.doOtherStuff()
        self.assertEqual(u.getSomething(), 'a value')

unittest.main()
(I also included mock implementations of "GetSlot" and the methods and functions in your example, so that I could test the decorator and metaclass I am suggesting in this answer.)
I'd argue you should separate your test of the context manager from your test of the Slot class. You could even use a mock object simulating the initialize/finalize interface of slot to test the context manager object, and then test your slot object separately.
from unittest import TestCase, main

class MockSlot(object):
    initialized = False
    ok_called = False
    error_called = False

    def initialize(self):
        self.initialized = True

    def finalize_ok(self):
        self.ok_called = True

    def finalize_error(self):
        self.error_called = True

class GetSlot(object):
    def __init__(self, slot_factory=MockSlot):
        self.slot_factory = slot_factory

    def __enter__(self):
        s = self.s = self.slot_factory()
        s.initialize()
        return s

    def __exit__(self, type, value, traceback):
        if type is None:
            self.s.finalize_ok()
        else:
            self.s.finalize_error()

class TestContextManager(TestCase):
    def test_getslot_calls_initialize(self):
        g = GetSlot()
        with g as slot:
            pass
        self.assertTrue(g.s.initialized)

    def test_getslot_calls_finalize_ok_if_operation_successful(self):
        g = GetSlot()
        with g as slot:
            pass
        self.assertTrue(g.s.ok_called)

    def test_getslot_calls_finalize_error_if_operation_unsuccessful(self):
        g = GetSlot()
        try:
            with g as slot:
                raise ValueError
        except:
            pass
        self.assertTrue(g.s.error_called)

if __name__ == "__main__":
    main()
This makes code simpler, prevents concern mixing and allows you to reuse the context manager without having to code it in many places.
