Access the current unittest.TestCase instance

I'm writing a testing utility library and may want to do the following:

def assertSomething(test_case, ...):
    test_case.assertEqual()

Is there a way to skip passing test_case around? e.g.

def assertSomething(...):
    get_current_unittest_case().assertEqual()

AssertionError
If you just want to do some checks and fail with a custom message, raising AssertionError (via raise or assert) is the way to go. By default, TestCase.failureException is AssertionError, and fail (which the convenience methods of unittest.TestCase use internally) just raises it.
test_things.py:
import unittest

def check_is_zero(number):
    assert number == 0, "{!r} is not 0".format(number)

def check_is_one(number):
    if number != 1:
        raise AssertionError("{!r} is not 1".format(number))

class NumbersTest(unittest.TestCase):
    def test_one(self):
        check_is_one(1)

    def test_zero(self):
        check_is_zero(0)
TestCase mixin
An easy and relatively readable way to add new assertions is to
make a “mixin class” that test cases will subclass. Sometimes it
is good enough.
testutils.py, which contains the mixin:
class ArithmeticsMixin:
    def check_is_one(self, number):
        self.assertEqual(number, 1)
test_thing.py, actual tests:
import unittest
import testutils

class NumbersTest(unittest.TestCase, testutils.ArithmeticsMixin):
    def test_one(self):
        self.check_is_one(1)
If there will be many mixin classes, a common base class may help:
import unittest
import testutils

class BaseTestCase(unittest.TestCase, testutils.ArithmeticsMixin):
    """Test case with additional methods for testing arithmetics."""

class NumbersTest(BaseTestCase):
    def test_one(self):
        self.check_is_one(1)
Thread local and TestCase subclass
A less readable option uses a special base class, a thread local (like a global, but aware of threads) and a getter function.
testutils.py:
import unittest
import threading

_case_data = threading.local()  # thread local

class ImproperlyUsed(Exception):
    """Raised if get_case is called without cooperation of BaseTestCase."""

def get_case():  # getter function
    case = getattr(_case_data, "case", None)
    if case is None:
        raise ImproperlyUsed
    return case

class BaseTestCase(unittest.TestCase):  # base class
    def run(self, *args, **kwargs):
        _case_data.case = self
        super().run(*args, **kwargs)
        _case_data.case = None

def check_is_one(number):
    case = get_case()
    case.assertEqual(number, 1)
While a test case is running, self (the test case instance) is saved as _case_data.case, so later, inside check_is_one (or any other function that is called from inside a test method and wants access to self), get_case can obtain a reference to the test case instance. Note that after the run _case_data.case is set back to None, and if get_case is called after that, ImproperlyUsed is raised.
test_thing.py:
import testutils

def check_is_zero(number):  # example of out-of-testutils function
    testutils.get_case().assertEqual(number, 0)

class NumbersTest(testutils.BaseTestCase):
    def test_one(self):
        testutils.check_is_one(1)

    def test_zero(self):
        check_is_zero(0)
sys._getframe
Finally, sys._getframe. Let's just hope you don't need it, because it is a CPython implementation detail, not part of the Python language.
testutils.py:
import sys
import unittest

class ImproperlyUsed(Exception):
    """Raised if get_case can't get "self" TestCase instance."""

def get_case():
    case = sys._getframe().f_back.f_back.f_locals.get("self")
    if not isinstance(case, unittest.TestCase):
        raise ImproperlyUsed
    return case

def check_is_one(number):
    case = get_case()
    case.assertEqual(number, 1)
sys._getframe returns the frame at the top of the call stack; two frames down, f_locals is inspected and self, the unittest.TestCase instance, is found there. As in the previous option there is a sanity check, but here it is done with isinstance.
test_things.py:
import unittest
import testutils

def check_is_zero(number):  # example of out-of-testutils function
    testutils.get_case().assertEqual(number, 0)

class NumbersTest(unittest.TestCase):
    def test_one(self):
        testutils.check_is_one(1)

    def test_zero(self):
        check_is_zero(0)
If you just want to provide assertEqual for some new type,
take a look at addTypeEqualityFunc.
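For example, a minimal sketch of addTypeEqualityFunc; the Money class and the assertMoneyEqual helper are made up for illustration:

import unittest

class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

class MoneyTest(unittest.TestCase):
    def setUp(self):
        # assertEqual() will dispatch here when both sides are Money instances
        self.addTypeEqualityFunc(Money, self.assertMoneyEqual)

    def assertMoneyEqual(self, first, second, msg=None):
        if (first.amount, first.currency) != (second.amount, second.currency):
            raise self.failureException(
                msg or "{!r} != {!r}".format(vars(first), vars(second)))

    def test_equal_amounts(self):
        self.assertEqual(Money(10, "EUR"), Money(10, "EUR"))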

Related

How to verify that an unknown object created by the code under test was called as expected

I have some code that creates instances from a list of classes that is passed to it. This cannot change, as the list of classes passed to it is designed to be dynamic and chosen at runtime through configuration files. Initialising those classes must be done by the code under test, as it depends on factors only the code under test knows how to control (i.e. it will set specific initialisation args). I've tested the code quite extensively by running it and manually trawling through reams of output. Obviously I'm at the point where I need to add some proper unit tests, as I've proven the concept to myself. The following example demonstrates what I am trying to test:
I would like to test the run method of the Foo class defined below:
# foo.py
class Foo:
    def __init__(self, stuff):
        self._stuff = stuff

    def run():
        for thing in self._stuff:
            stuff = stuff()
            stuff.run()
Where one (or more) files would contain the class definitions for stuff to run, for example:
# classes.py
class Abc:
    def run(self):
        print("Abc.run()", self)

class Ced:
    def run(self):
        print("Ced.run()", self)

class Def:
    def run(self):
        print("Def.run()", self)
And finally, an example of how it would tie together:
>>> from foo import Foo
>>> from classes import Abc, Ced, Def
>>> f = Foo([Abc, Ced, Def])
>>> f.run()
Abc.run() <__main__.Abc object at 0x7f7469f9f9a0>
Ced.run() <__main__.Ced object at 0x7f7469f9f9a1>
Def.run() <__main__.Def object at 0x7f7469f9f9a2>
Where the list of stuff to run defines the object classes (NOT instances), as the instances only have a short lifespan; they're created by Foo.run() and die when (or rather, sometime soon after) the function completes. However, I'm finding it very tricky to come up with a clear method to test this code.
I want to prove that the run method of each of the classes in the list of stuff to run was called. However, from the test I do not have visibility of the Abc instance that the run method creates, so how can it be verified? I can't patch the import, as the code under test does not explicitly import the class (after all, it doesn't care what class it is). For example:
# test.py
from foo import Foo

class FakeStuff:
    def run(self):
        self.run_called = True

def test_foo_runs_all_stuff():
    under_test = Foo([FakeStuff])
    under_test.run()

    # How to verify that FakeStuff.run() was called?
    assert <SOMETHING>.run_called, "FakeStuff.run() was not called"
It seems that you correctly realise that you can pass anything into Foo(), so you should be able to log something in FakeStuff.run():
class Foo:
    def __init__(self, stuff):
        self._stuff = stuff

    def run(self):
        for thing in self._stuff:
            stuff = thing()
            stuff.run()

class FakeStuff:
    run_called = 0

    def run(self):
        FakeStuff.run_called += 1

def test_foo_runs_all_stuff():
    under_test = Foo([FakeStuff, FakeStuff])
    under_test.run()

    # How to verify that FakeStuff.run() was called?
    assert FakeStuff.run_called == 2, "FakeStuff.run() was not called"
Note that I have modified your original Foo to what I think you meant. Please correct me if I'm wrong.
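If you would rather not hand-roll FakeStuff, a MagicMock can also stand in for the class: calling the mock "class" returns its return_value, and run() is then recorded on that instance mock. A sketch, assuming the corrected Foo above lives in foo.py:

from unittest.mock import MagicMock

from foo import Foo  # assumes the corrected Foo above is in foo.py

def test_foo_runs_all_stuff_with_mock():
    fake_cls = MagicMock()                 # stands in for a class in the list
    Foo([fake_cls, fake_cls]).run()
    # Foo.run() calls fake_cls() and then .run() on the result, so both
    # calls are recorded on the shared return_value instance mock.
    assert fake_cls.call_count == 2
    assert fake_cls.return_value.run.call_count == 2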

Patching local class reference

Like many people, I'm having issues with mock patching and getting the path right. Specifically, my code references another class in the same file and I'm having trouble patching that reference.
I have the following python file, package/engine/dataflows/flow.py:
class Flow:
    def run(self, type):
        if type == 'A':
            method1()
        elif type == 'B':
            method2()
        else:
            backfill = Backfill()
            backfill.run()

class Backfill(Flow):
    def run(self):
        ...
And a test file package/tests/engine/dataflows/test_Flow.py
import unittest
from unittest.mock import Mock, patch

from engine.dataflows.flow import Flow

class MockFlow(Flow):
    ...

class TestFlowRun(unittest.TestCase):
    def setUp(self):
        self.flow = MockFlow()

    def test_run_type_c(self):
        with patch('engine.dataflows.flow.Backfill') as mock_backfill:
            self.flow.run(type='C')
            assert mock_backfill.run.call_count == 1
The patch works in that it doesn't throw an error when run with pytest, but the assertion is failing. I assume that is because the local reference to the Backfill class has essentially already been imported when MockFlow was initialized, but I have been unable to come up with a patching path that handles this.
The contents of flow.py include the Flow base class and a couple of child classes that implement different data flow patterns. They're co-located in the same file for ease of understanding and common dependencies.
The problem is that you are checking the run() function of the mocked class itself, not of an instance of that class. The mocked Backfill class will return an instance (another mock) via its constructor, and that instance mock is the object you will want to check.
with patch('engine.dataflows.flow.Backfill') as mock_backfill:
    mocked_backfill_instance = mock_backfill.return_value
    self.flow.run(type='C')
    assert mocked_backfill_instance.run.call_count == 1
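The key detail is that calling a Mock returns its return_value, so the Backfill "instance" created inside Flow.run() is the very object the test already holds. A quick standalone illustration:

from unittest.mock import MagicMock

mock_backfill = MagicMock()
instance = mock_backfill()                      # what Flow.run() gets back from Backfill()
assert instance is mock_backfill.return_value   # same object the test can inspect
instance.run()
assert mock_backfill.return_value.run.call_count == 1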

Test instance of class that mocked method is called from in Python

I am mocking out a method of a class and want to check the instance of the class that the method was called on, to test that the creation part of my function works as expected.
In my particular case do_stuff tries to write bar_instance to an Excel file, and I don't want that to happen, i.e.:
def create_instance(*args):
    return Bar(*args)

class Bar():
    def __init__(self, *args):
        self.args = args

    def do_stuff(self):
        pass

def foo(*args):
    bar_instance = create_instance(*args)
    bar_instance.do_stuff()
Then in a testing file
from unittest import TestCase
from unittest.mock import patch

from path.to.file import foo

class TestFoo(TestCase):
    @patch('path.to.file.Bar.do_stuff')
    def test_foo(self, mock_do_stuff):
        test_args = [1]
        _ = foo(*test_args)
        # Test here the instance of `Bar` that `mock_do_stuff` was called from.
        # Something like:
        actual_args = list(bar_instance.args)
        self.assertEqual(test_args, actual_args)
I put a break in the test function after foo(*test_args) is run but can't see any way from the mocked method of accessing the instance of Bar it was called from and am a bit stuck. I don't want to mock out Bar further up the code as I want to make sure the correct instance of Bar is being created.
In your code example, there are three things that might need testing: the function create_instance, the class Bar and the function foo. I understand your test code to mean that you want to ensure that foo calls do_stuff on the instance returned by create_instance.
Since the original create_instance function has control over the created instance, a solution of your problem is to mock create_instance such that your test gains control of the object that is handed over to foo:
import unittest
from unittest import TestCase
from unittest.mock import patch, MagicMock

from SO_60624698 import foo

class TestFoo(TestCase):
    @patch('SO_60624698.create_instance')
    def test_foo_calls_do_stuff_on_proper_instance(
            self, create_instance_mock):
        # Setup
        Bar_mock = MagicMock()
        create_instance_mock.return_value = Bar_mock
        # Exercise
        foo(1, 2, 3)  # args are irrelevant
        # Verify
        Bar_mock.do_stuff.assert_called()

if __name__ == '__main__':
    unittest.main()
In addition, you might also want to test if foo passes the arguments correctly to create_instance. This could be implemented as a separate test:
...

    @patch('SO_60624698.create_instance')
    def test_foo_passes_arguments_to_create_instance(
            self, create_instance_mock):
        # Setup
        create_instance_mock.return_value = MagicMock()
        # Exercise
        foo(1, 22, 333)
        # Verify
        create_instance_mock.assert_called_with(1, 22, 333)
And, certainly, to complete the whole test of the object generation, you could test create_instance directly, by calling it and checking on the returned instance of Bar if it has used its arguments correctly for the construction of the Bar instance.
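For example, a minimal sketch of such a direct test, assuming create_instance and Bar are importable from the same SO_60624698 module used above:

import unittest
from SO_60624698 import create_instance, Bar  # module name used in the examples above

class TestCreateInstance(unittest.TestCase):
    def test_create_instance_builds_bar_with_args(self):
        instance = create_instance(1, 2, 3)
        self.assertIsInstance(instance, Bar)
        # Bar stores its positional args as a tuple in the example above
        self.assertEqual(instance.args, (1, 2, 3))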
As patch returns an instance of Mock (or actually MagicMock, but it inherits the relevant methods from its base - Mock), you have the assert_called_with method available, which should do the trick.
Note that this method is sensitive to args/kwargs - you have to assert the exact same call.
Another note: it might be a better practice to use patch.object instead of patch here
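For the patch.object variant mentioned above, the target is given as an object plus an attribute name rather than a dotted string. A sketch, again assuming the SO_60624698 module layout used above:

import unittest
from unittest.mock import patch

import SO_60624698  # the module under test, as in the examples above

class TestFooWithPatchObject(unittest.TestCase):
    @patch.object(SO_60624698, 'create_instance')
    def test_foo_calls_do_stuff(self, create_instance_mock):
        SO_60624698.foo(1)
        create_instance_mock.assert_called_with(1)
        create_instance_mock.return_value.do_stuff.assert_called_once()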

Unable to mock class methods using unittest in Python

module a.ClassA:
class ClassA():
    def __init__(self, callingString):
        print(callingString)

    def functionInClassA(self, val):
        return val
module b.ClassB:
from a.ClassA import ClassA

class ClassB():
    def __init__(self, val):
        self.value = val

    def functionInsideClassB(self):
        obj = ClassA("Calling From Class B")
        value = obj.functionInClassA(self.value)
Python unittest class
import unittest
from b.ClassB import ClassB
from mock import patch, Mock, PropertyMock, mock

class Test(unittest.TestCase):
    @patch('b.ClassB.ClassA', autospec=True)
    def test_sample(self, classAmock):
        dummyMock = Mock()
        dummyMock.functionInClassA.return_value = "mocking functionInClassA"
        classAmock.return_value = dummyMock
        obj = ClassB("dummy_val")
        obj.functionInsideClassB()
        assert dummyMock.functionInClassA.assert_called_once_with("dummy_val")
The assertion fails. Where exactly am I going wrong?
You assigned to return_value twice:
classAmock.return_value=dummyMock
classAmock.return_value=Mock()
That second assignment undoes your work setting up dummyMock entirely; the new Mock instance has no functionInClassA attribute set up.
You don't need to create new mock objects; just use the default return_value attribute value:
class Test(unittest.TestCase):
    @patch('b.ClassB.ClassA', autospec=True)
    def test_sample(self, classAmock):
        instance = classAmock.return_value
        instance.functionInClassA.return_value = "mocking functionInClassA"
        obj = ClassB("dummy_val")
        obj.functionInsideClassB()
        instance.functionInClassA.assert_called_once_with("dummy_val")
You do not need to assert the return value of assert_called_once_with() as that is always None (making your extra assert fail, always). Leave the assertion to the assert_called_once_with() method, it'll raise as needed.
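To see why the extra assert always fails: the assert_* helpers on a mock return None on success and raise AssertionError on failure, so wrapping them in a bare assert turns every passing check into a failure. A quick standalone check:

from unittest.mock import Mock

m = Mock()
m("dummy_val")
result = m.assert_called_once_with("dummy_val")  # passes: raises nothing
print(result)  # None, so `assert m.assert_called_once_with(...)` would always fail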

In Python, is there a good idiom for using context managers in setUp/tearDown?

I find that I am using plenty of context managers in Python. However, I have been testing a number of things with them, and I often need the following:
class MyTestCase(unittest.TestCase):
    def testFirstThing(self):
        with GetResource() as resource:
            u = UnderTest(resource)
            u.doStuff()
            self.assertEqual(u.getSomething(), 'a value')

    def testSecondThing(self):
        with GetResource() as resource:
            u = UnderTest(resource)
            u.doOtherStuff()
            self.assertEqual(u.getSomething(), 'a value')
When this grows to many tests, it is clearly going to get boring, so in the spirit of SPOT/DRY (single point of truth / don't repeat yourself), I'd want to refactor those bits into the test setUp() and tearDown() methods.
However, trying to do that has led to this ugliness:
def setUp(self):
    self._resource = GetSlot()
    self._resource.__enter__()

def tearDown(self):
    self._resource.__exit__(None, None, None)
There must be a better way to do this. Ideally, in the setUp()/tearDown() without repetitive bits for each test method (I can see how repeating a decorator on each method could do it).
Edit: Consider the under-test object to be internal, and the GetResource object to be a third-party thing (which we aren't changing).
I've renamed GetSlot to GetResource here, since this is more general than my specific case, in which context managers are how the object is meant to enter and leave a locked state.
How about overriding unittest.TestCase.run() as illustrated below? This approach doesn't require calling any private methods or doing something to every method, which is what the questioner wanted.
from contextlib import contextmanager
import unittest

@contextmanager
def resource_manager():
    yield 'foo'

class MyTest(unittest.TestCase):
    def run(self, result=None):
        with resource_manager() as resource:
            self.resource = resource
            super(MyTest, self).run(result)

    def test(self):
        self.assertEqual('foo', self.resource)

unittest.main()
This approach also allows passing the TestCase instance to the context manager, if you want to modify the TestCase instance there.
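For example, a sketch in the same style, where a (hypothetical) resource_manager(case) variant receives the TestCase instance and sets an attribute on it around the test run:

from contextlib import contextmanager
import unittest

@contextmanager
def resource_manager(case):        # hypothetical variant that takes the TestCase
    case.configured = True         # modify the TestCase before the tests run
    try:
        yield 'foo'
    finally:
        case.configured = False    # undo it afterwards

class MyTest(unittest.TestCase):
    def run(self, result=None):
        with resource_manager(self) as resource:
            self.resource = resource
            super(MyTest, self).run(result)

    def test(self):
        self.assertEqual('foo', self.resource)
        self.assertTrue(self.configured)

unittest.main()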
Manipulating context managers in situations where you don't want a with statement to clean things up if all your resource acquisitions succeed is one of the use cases that contextlib.ExitStack() is designed to handle.
For example (using addCleanup() rather than a custom tearDown() implementation):
def setUp(self):
    with contextlib.ExitStack() as stack:
        self._resource = stack.enter_context(GetResource())
        self.addCleanup(stack.pop_all().close)
That's the most robust approach, since it correctly handles acquisition of multiple resources:
def setUp(self):
    with contextlib.ExitStack() as stack:
        self._resource1 = stack.enter_context(GetResource())
        self._resource2 = stack.enter_context(GetOtherResource())
        self.addCleanup(stack.pop_all().close)
Here, if GetOtherResource() fails, the first resource will be cleaned up immediately by the with statement, while if it succeeds, the pop_all() call will postpone the cleanup until the registered cleanup function runs.
If you know you're only ever going to have one resource to manage, you can skip the with statement:
def setUp(self):
    stack = contextlib.ExitStack()
    self._resource = stack.enter_context(GetResource())
    self.addCleanup(stack.close)
However, that's a bit more error prone, since if you add more resources to the stack without first switching to the with statement based version, successfully allocated resources may not get cleaned up promptly if later resource acquisitions fail.
You can also write something comparable using a custom tearDown() implementation by saving a reference to the resource stack on the test case:
def setUp(self):
    with contextlib.ExitStack() as stack:
        self._resource1 = stack.enter_context(GetResource())
        self._resource2 = stack.enter_context(GetOtherResource())
        self._resource_stack = stack.pop_all()

def tearDown(self):
    self._resource_stack.close()
Alternatively, you can also define a custom cleanup function that accesses the resource via a closure reference, avoiding the need to store any extra state on the test case purely for cleanup purposes:
def setUp(self):
    with contextlib.ExitStack() as stack:
        resource = stack.enter_context(GetResource())
        callbacks = stack.pop_all()  # take ownership, so the with statement doesn't release the resource
        def cleanup():
            if necessary:
                one_last_chance_to_use(resource)
            callbacks.close()
        self.addCleanup(cleanup)
pytest fixtures are very close to your idea/style, and allow for exactly what you want:
import pytest
from code.to.test import foo

@pytest.fixture(...)
def resource():
    with your_context_manager as r:
        yield r

def test_foo(resource):
    assert foo(resource).bar() == 42
The problem with calling __enter__ and __exit__ as you did is not that you have done so: they can be called outside of a with statement. The problem is that your code has no provision to call the object's __exit__ method properly if an exception occurs.
So, the way to do it is to have a decorator that will wrap the call to your original method in a with statement. A short metaclass can apply the decorator transparently to all methods named test* in the class:
# -*- coding: utf-8 -*-
from functools import wraps
import unittest

def setup_context(method):
    # the 'wraps' decorator preserves the original function name;
    # otherwise unittest would not call it, as its name
    # would not start with 'test'
    @wraps(method)
    def test_wrapper(self, *args, **kw):
        with GetSlot() as slot:
            self._slot = slot
            result = method(self, *args, **kw)
            delattr(self, "_slot")
        return result
    return test_wrapper

class MetaContext(type):
    def __new__(mcs, name, bases, dct):
        for key, value in dct.items():
            if key.startswith("test"):
                dct[key] = setup_context(value)
        return type.__new__(mcs, name, bases, dct)

class GetSlot(object):
    def __enter__(self):
        return self

    def __exit__(self, *args, **kw):
        print "exiting object"

    def doStuff(self):
        print "doing stuff"

    def doOtherStuff(self):
        raise ValueError

    def getSomething(self):
        return "a value"

def UnderTest(*args):
    return args[0]

class MyTestCase(unittest.TestCase):
    __metaclass__ = MetaContext

    def testFirstThing(self):
        u = UnderTest(self._slot)
        u.doStuff()
        self.assertEqual(u.getSomething(), 'a value')

    def testSecondThing(self):
        u = UnderTest(self._slot)
        u.doOtherStuff()
        self.assertEqual(u.getSomething(), 'a value')

unittest.main()
(I also included mock implementations of "GetSlot" and of the methods and functions in your example, so that I could test the decorator and metaclass I am suggesting in this answer.)
I'd argue you should separate your test of the context manager from your test of the Slot class. You could even use a mock object simulating the initialize/finalize interface of slot to test the context manager object, and then test your slot object separately.
from unittest import TestCase, main

class MockSlot(object):
    initialized = False
    ok_called = False
    error_called = False

    def initialize(self):
        self.initialized = True

    def finalize_ok(self):
        self.ok_called = True

    def finalize_error(self):
        self.error_called = True

class GetSlot(object):
    def __init__(self, slot_factory=MockSlot):
        self.slot_factory = slot_factory

    def __enter__(self):
        s = self.s = self.slot_factory()
        s.initialize()
        return s

    def __exit__(self, type, value, traceback):
        if type is None:
            self.s.finalize_ok()
        else:
            self.s.finalize_error()

class TestContextManager(TestCase):
    def test_getslot_calls_initialize(self):
        g = GetSlot()
        with g as slot:
            pass
        self.assertTrue(g.s.initialized)

    def test_getslot_calls_finalize_ok_if_operation_successful(self):
        g = GetSlot()
        with g as slot:
            pass
        self.assertTrue(g.s.ok_called)

    def test_getslot_calls_finalize_error_if_operation_unsuccessful(self):
        g = GetSlot()
        try:
            with g as slot:
                raise ValueError
        except:
            pass
        self.assertTrue(g.s.error_called)

if __name__ == "__main__":
    main()
This makes code simpler, prevents concern mixing and allows you to reuse the context manager without having to code it in many places.
