ValueError: no such test method in <class 'myapp.tests.SessionTestCase'>: runTest - python

I have a test case:
class LoginTestCase(unittest.TestCase):
    ...
I'd like to use it in a different test case:
class EditProfileTestCase(unittest.TestCase):
    def __init__(self):
        self.t = LoginTestCase()
        self.t.login()
This raises:
ValueError: no such test method in <class 'LoginTestCase'>: runTest
I looked at the unittest code where the exception is being called, and it looks like the tests aren't supposed to be written this way. Is there a standard way to write something you'd like tested so that it can be reused by later tests? Or is there a workaround?
I've added an empty runTest method to LoginTestCase as a dubious workaround for now.

The confusion with "runTest" is mostly based on the fact that this works:
import unittest

class MyTest(unittest.TestCase):
    def test_001(self):
        print "ok"

if __name__ == "__main__":
    unittest.main()
So there is no "runTest" in that class, and yet all of the test functions are called. However, if you look at the base class "TestCase" (lib/python/unittest/case.py), you will find that it has an argument "methodName" that defaults to "runTest", but it does NOT have a default implementation of "def runTest":
class TestCase:
    def __init__(self, methodName='runTest'):
The reason that unittest.main works fine is that it does not need "runTest" - you can mimic its behaviour by creating one TestCase-subclass instance per test method in your subclass, passing the method name as the first argument:
import unittest

class MyTest(unittest.TestCase):
    def test_001(self):
        print "ok"

if __name__ == "__main__":
    suite = unittest.TestSuite()
    for method in dir(MyTest):
        if method.startswith("test"):
            suite.addTest(MyTest(method))
    unittest.TextTestRunner().run(suite)

Here's some 'deep black magic':
suite = unittest.TestLoader().loadTestsFromTestCase(Test_MyTests)
unittest.TextTestRunner(verbosity=3).run(suite)
Very handy if you just want to test run your unit tests from a shell (i.e., IPython).

If you don't mind editing the unittest module code directly, the simple fix is to add, under the TestCase class in case.py, a new method called runTest that does nothing. The file to edit sits under pythoninstall\Lib\unittest\case.py:
def runTest(self):
    pass
This will stop you from ever getting this error.

Guido's answer is almost there; however, it doesn't explain what actually happens. I needed to look at the unittest code to grasp the flow.
Say you have the following.
import unittest

class MyTestCase(unittest.TestCase):
    def testA(self):
        pass

    def testB(self):
        pass
When you use unittest.main(), it will try to discover test cases in the current module. The important code is unittest.loader.TestLoader.loadTestsFromTestCase:
def loadTestsFromTestCase(self, testCaseClass):
    # ...
    # This will look at the class's callable attributes that start
    # with 'test' and return their names, sorted.
    testCaseNames = self.getTestCaseNames(testCaseClass)
    # If there's no test to run, check whether the class has the default method.
    if not testCaseNames and hasattr(testCaseClass, 'runTest'):
        testCaseNames = ['runTest']
    # Create a TestSuite instance holding one test case instance per test method.
    loaded_suite = self.suiteClass(map(testCaseClass, testCaseNames))
    return loaded_suite
What the latter does is convert the test case class into a test suite that holds one instance of the class per test method. I.e. my example will be turned into unittest.suite.TestSuite([MyTestCase('testA'), MyTestCase('testB')]). So if you would like to create a test case manually, you need to do the same thing.
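A minimal sketch of doing that by hand, mirroring what the loader builds:

import unittest

# One TestCase instance per test method, wrapped in a suite.
suite = unittest.TestSuite([MyTestCase('testA'), MyTestCase('testB')])
unittest.TextTestRunner().run(suite)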

@dmvianna's answer got me very close to being able to run unittest in a Jupyter (IPython) notebook, but I had to do a bit more. If I wrote just the following:
import unittest

class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods)
unittest.TextTestRunner().run(suite)
I got
Ran 0 tests in 0.000s
OK
It's not broken, but it doesn't run any tests! If I instantiated the test class
suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())
(note the parens at the end of the line; that's the only change) I got
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())

/usr/lib/python2.7/unittest/case.pyc in __init__(self, methodName)
    189         except AttributeError:
    190             raise ValueError("no such test method in %s: %s" %
--> 191                              (self.__class__, methodName))
    192         self._testMethodDoc = testMethod.__doc__
    193         self._cleanups = []

ValueError: no such test method in <class '__main__.TestStringMethods'>: runTest
The fix is now reasonably clear: add runTest to the test class:
class TestStringMethods(unittest.TestCase):
    def runTest(self):
        self.test_upper()
        self.test_isupper()
        self.test_split()

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

suite = unittest.TestLoader().loadTestsFromModule(TestStringMethods())
unittest.TextTestRunner().run(suite)
Ran 3 tests in 0.002s
OK
It also works correctly (and runs 3 tests) if my runTest just passes, as suggested by @Darren.
This is a little yucky, requiring some manual labor on my part, but it's also more explicit, and that's a Python virtue, isn't it?
I could not get any of the techniques that call unittest.main with explicit arguments (from here or from the related question Unable to run unittest's main function in ipython/jupyter notebook) to work inside a Jupyter notebook, but I am back on the road with a full tank of gas.

unittest does deep black magic -- if you choose to use it to run your unit-tests (I do, since this way I can use a very powerful battery of test runners &c integrated into the build system at my workplace, but there are definitely worthwhile alternatives), you'd better play by its rules.
In this case, I'd simply have EditProfileTestCase derive from LoginTestCase (rather than directly from unittest.TestCase). If there are some parts of LoginTestCase that you do want to also test in the different environment of EditProfileTestCase, and others that you don't, it's a simple matter to refactor LoginTestCase into those two parts (possibly using multiple inheritance). If some things need to happen slightly differently in the two cases, factor them out into auxiliary "hook methods" (in a "Template Method" design pattern). I use all of these approaches often to diminish boilerplate and increase reuse in the copious unit tests I always write (if I have unit-test coverage < 95%, I feel truly uneasy -- below 90%, I start to feel physically sick ;-).
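A minimal sketch of the inheritance approach (the method names are illustrative, assuming login() is a helper on the original LoginTestCase):

import unittest

class LoginTestCase(unittest.TestCase):
    def login(self):
        ...  # shared behaviour exercised by several test cases

    def test_login(self):
        self.login()
        # assertions about the login itself

class EditProfileTestCase(LoginTestCase):
    def setUp(self):
        self.login()  # reuse the shared behaviour instead of instantiating LoginTestCase

    def test_edit_profile(self):
        ...  # exercise profile editing as a logged-in user

Because EditProfileTestCase inherits from LoginTestCase, test_login runs in both environments; if that's not wanted, move login() into a plain helper class or mixin instead.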

Related

Using MagicMock to test the inner components of a method

I'm new to using the mock library and to unit testing in general. I think I understand how the mock library works, but I think there is something wrong with my approach.
Let's say I have a class Foo that has a method addFoo(bar1, bar2), and addFoo calls other 'private' methods within Foo, but can also raise exceptions at various points in the code, i.e.:
class Foo:
    def __init__(self):
        # This creates lots of dependencies
        ...

    def _inner1(self, bar1):
        if this_doesnt_work:
            return None
        return modified_bar1

    def _inner2(self, bar2):
        if this_doesnt_work:
            return None
        return modified_bar2

    def addFoo(self, bar1, bar2):
        if self._inner1(bar1) and self._inner2(bar2):
            # do something
            ...
        else:
            raise SomeException
When I unit test, what I currently do is mock out Foo.__init__ in my test setup and set its return_value to None. This works and allows me to run tests against the methods in the class without all the external dependencies, and to just mock any class attributes on the instance created with the mocked Foo.__init__.
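For illustration, a sketch of that setup (the attribute name is hypothetical; on Python 2 the imports come from the external mock package rather than unittest.mock):

from unittest.mock import patch, Mock

# Replace __init__ so constructing Foo creates no real dependencies.
init_patcher = patch.object(Foo, '__init__', return_value=None)
init_patcher.start()

foo = Foo()                    # the mocked __init__ runs and returns None
foo.some_dependency = Mock()   # hypothetical attribute, filled in by hand

# ... run assertions against foo's methods ...
init_patcher.stop()            # restore the real __init__ (e.g. in tearDown)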
After that is done, if I wanted to unit test addFoo, I would do the following (skipping a bunch of steps):
@patch.object(Foo, '_inner2')
@patch.object(Foo, '_inner1')
def test_inner1_exception(self, mock_inner1, mock_inner2):
    mock_inner1.side_effect = Exception
    # then do some asserts to make sure it worked
QUESTION
How do I test an exception in addFoo if the code looks like this:
def addFoo(self, bar1, bar2):
    bars = self._getBars()  # something that returns a list
    my_modified_bar_list = []
    for bar in bars:
        my_modified_bar_list.append(self._inner1(bar))
    if len(my_modified_bar_list) == 0:
        raise SomeException
In this instance how do I test SomeException when it is dependent upon the values produced by some array within my method?
If you are making use of pytest you can assert about expected exceptions as follows:
import pytest
from unittest.mock import patch, Mock

@patch('Foo._getBars', return_value=[])  # note: the target must be the module-qualified path to Foo
def test_addFoo_exception(self, mock_get_bars):
    foo = Foo()
    with pytest.raises(SomeException):
        foo.addFoo(Mock(), Mock())
As the snippet above shows, by mocking _getBars you are in control of what is returned and can therefore force the condition that triggers the exception in your code, in this case an empty list.
I have written a short post with common unit testing pitfalls in Python that you might find useful.

Run code before and after each test in py.test?

I want to run additional setup and teardown checks before and after each test in my test suite. I've looked at fixtures, but I'm not sure whether they are the correct approach. I need to run the setup code prior to each test, and I need to run the teardown checks after each test.
My use-case is checking for code that doesn't clean up correctly: it leaves temporary files. In my setup I will check the files, and in the teardown I will check them again. If there are extra files, I want the test to fail.
py.test fixtures are a technically adequate method to achieve your purpose.
You just need to define a fixture like this:
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    # Code that will run before your test, for example:
    files_before = ...  # do something to check the existing files
    # A test function will be run at this point
    yield
    # Code that will run after your test, for example:
    files_after = ...  # do something to check the existing files
    assert files_before == files_after
By declaring your fixture with autouse=True, it will be automatically invoked for each test function defined in the same module.
That said, there is one caveat. Asserting at setup/teardown is a controversial practice. I'm under the impression that the py.test main authors do not like it (I do not like it either, so that may colour my own perception), so you might run into some problems or rough edges as you go forward.
You can use a fixture in order to achieve what you want.
import pytest

@pytest.fixture(autouse=True)
def run_before_and_after_tests(tmpdir):
    """Fixture to execute asserts before and after a test is run"""
    # Setup: fill with any logic you want
    yield  # this is where the testing happens
    # Teardown: fill with any logic you want
Detailed Explanation
@pytest.fixture(autouse=True), from the docs: "Occasionally, you may want to have fixtures get invoked automatically without declaring a function argument explicitly or a usefixtures decorator." Therefore, this fixture will run every time a test is executed.
# Setup: fill with any logic you want - this logic will be executed before every test is actually run. In your case, you can add the assert statements that should be executed before the actual test.
yield - as indicated in the comment, this is where the testing happens.
# Teardown: fill with any logic you want - this logic will be executed after every test. It is guaranteed to run regardless of what happens during the tests.
Note: in pytest there is a difference between a failing test and an error while executing a test. A Failure indicates that the test failed in some way. An Error indicates that you couldn't get to the point of doing a proper test.
Consider the following examples:
Assert fails before test is run -> ERROR
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert False  # This will generate an error when running tests
    yield
    assert True

def test():
    assert True
Assert fails after test is run -> ERROR
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert False

def test():
    assert True
Test fails -> FAILED
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert True

def test():
    assert False
Test passes -> PASSED
import pytest

@pytest.fixture(autouse=True)
def run_around_tests():
    assert True
    yield
    assert True

def test():
    assert True
Fixtures are exactly what you want.
That's what they are designed for.
Whether you use pytest style fixtures, or setup and teardown (module, class, or method level) xUnit style fixtures, depends on the circumstance and personal taste.
From what you are describing, it seems like you could use pytest autouse fixtures.
Or xUnit-style function-level setup_function()/teardown_function() (see the sketch after this answer).
Pytest has you completely covered. So much so that perhaps it's a fire hose of information.
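For completeness, a minimal sketch of those function-level hooks applied to the question's temporary-file check (the bookkeeping is illustrative):

import os

files_before = None

def setup_function(function):
    # runs before each test function in this module
    global files_before
    files_before = set(os.listdir('.'))

def teardown_function(function):
    # runs after each test function in this module
    assert set(os.listdir('.')) == files_before, "test left temporary files behind"

def test_something():
    assert True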
You can use the module-level setup/teardown fixtures of pytest. Here's the link:
http://pytest.org/latest/xunit_setup.html
It works as follows:
def setup_module(module):
    """Setup any state specific to the execution of the given module."""

def teardown_module(module):
    """Teardown any state that was previously set up with the
    setup_module method."""

class TestClass:
    def test_01(self):
        # test 1 code
        ...
It will call setup_module once before the tests in the module run and teardown_module once after they complete. You can include this pair of fixtures in each test script to run them around that script's tests.
If you want to use something that is common to all tests in a directory, you can use package/directory-level fixtures from the nose framework:
http://pythontesting.net/framework/nose/nose-fixture-reference/#package
In the __init__.py file of the package you can include the following:
def setup_package():
    '''Set up your environment for the test package.'''

def teardown_package():
    '''Revert the state.'''
You may use decorators, but programmatically, so you don't need to put the decorator on each method.
I'm assuming several things in the following code:
The test methods are all named like "testXXX()".
The decorator is added in the same module where the test methods are implemented.
def test1():
    print("Testing hello world")

def test2():
    print("Testing hello world 2")

# This is the decorator
class TestChecker(object):
    def __init__(self, testfn, *args, **kwargs):
        self.testfn = testfn

    def pretest(self):
        print('precheck %s' % str(self.testfn))

    def posttest(self):
        print('postcheck %s' % str(self.testfn))

    def __call__(self):
        self.pretest()
        self.testfn()
        self.posttest()

# Wrap every module-level function whose name starts with 'test'
for fn in dir():
    if fn.startswith('test'):
        locals()[fn] = TestChecker(locals()[fn])
Now if you call the test methods...
test1()
test2()
The output should be something like:
precheck <function test1 at 0x10078cc20>
Testing hello world
postcheck <function test1 at 0x10078cc20>
precheck <function test2 at 0x10078ccb0>
Testing hello world 2
postcheck <function test2 at 0x10078ccb0>
If you have test methods as class methods, the approach is also valid. For instance:
class TestClass(object):
#classmethod
def my_test(cls):
print ("Testing from class method")
for fn in dir(TestClass) :
if not fn.startswith('__'):
setattr(TestClass, fn, TestChecker(getattr(TestClass, fn)))
The call to TestClass.my_test() will print:
precheck <bound method type.my_test of <class '__main__.TestClass'>>
Testing from class method
postcheck <bound method type.my_test of <class '__main__.TestClass'>>
It is an old question, but I personally found another way in the docs: use the pytest.ini file:

[pytest]
usefixtures = my_setup_and_tear_down
import pytest

@pytest.fixture
def my_setup_and_tear_down():
    # SETUP
    # Write here the logic that you need for the setUp
    yield  # this statement will let the tests execute
    # TEARDOWN
    # Write here the logic that you need after each test
About the yield statement and how it allows the test to run, see the pytest documentation on yield fixtures.
Fixtures by default have scope=function. So, if you just use a definition such as

@pytest.fixture
def fixture_func():
    ...

it defaults to scope='function'. So any finalizers in the fixture function will be called after each test.
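For contrast, a sketch with a wider scope, where the finalizer runs only once (the resource is illustrative):

import pytest

@pytest.fixture(scope='module')
def shared_resource():
    resource = {'connections': []}  # hypothetical shared state
    yield resource
    # finalizer: runs once, after the last test in the module
    resource['connections'].clear()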

Is there a downside for using __init__(self) instead of setup(self) for a nose test class?

Running nosetests -s for
class TestTemp():
    def __init__(self):
        print '__init__'
        self.even = 0

    def setup(self):
        print '__setup__'
        self.odd = 1

    def test_even(self):
        print 'test_even'
        even_number = 10
        assert even_number % 2 == self.even

    def test_odd(self):
        print 'test_odd'
        odd_number = 11
        assert odd_number % 2 == self.odd
prints out the following.
__init__
__init__
__setup__
test_even
.__setup__
test_odd
.
The test instances are created before tests are run, while setup runs right before the test.
For the general case, __init__() and setup() accomplish the same thing, but is there a downside to using __init__() instead of setup()? Or to using both?
While __init__ may work as a replacement for setUp, you should stick to setUp because it is part of the stylized protocol for writing tests. It also has a counterpart, tearDown, which __init__ does not, as well as class- and module-level counterparts that __init__ lacks.
Writing test classes is different than writing normal classes, so you should stick to the style used to write test classes.
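For reference, a minimal sketch of that protocol:

import unittest

class TestWithFixture(unittest.TestCase):
    def setUp(self):
        # runs before every test method
        self.items = []

    def tearDown(self):
        # runs after every test method, even if the test failed
        self.items = None

    def test_append(self):
        self.items.append(1)
        self.assertEqual(self.items, [1])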
Yes, you are supposed to create a clean slate for your tests and keep individual tests isolated.
It appears the test instances (one per test) are created in one batch, while setup is called right before each test. If your setup needs to reset external state, you'll need to do it in setup; if you were to do it in __init__, individual tests could screw up that external state for the rest of the test run.

How can I create a class-scoped test fixture in pyUnit?

I am unit testing mercurial integration and have a test class which currently creates a repository with a file and a clone of that repository in its setUp method and removes them in its tearDown method.
As you can probably imagine, this gets quite performance-heavy very quickly, especially if I have to do it for every test individually.
So what I would like to do is create the folders and initialize them for mercurial on loading the class, so each and every unittest in the TestCase class can use these repositories. Then when all the tests are run, I'd like to remove them. The only thing my setUp and tearDown methods then have to take care of is that the two repositories are in the same state between each test.
Basically what I'm looking for is a python equivalent of JUnit's #BeforeClass and #AfterClass annotations.
I've now done it by subclassing the TestSuite class, since the standard loader wraps all the test methods in instances of the TestCase in which they're defined and puts them together in a TestSuite. I have the TestSuite call the before() and after() methods of the first TestCase. This of course means that you can't initialize any values on your TestCase object, but you probably want to do that in your setUp anyway.
The TestSuite looks like this:
class BeforeAfterSuite(unittest.TestSuite):
    def run(self, result):
        if len(self._tests) < 1:
            return unittest.TestSuite.run(self, result)
        first_test = self._tests[0]
        if "before" in dir(first_test):
            first_test.before()
        result = unittest.TestSuite.run(self, result)
        if "after" in dir(first_test):
            first_test.after()
        return result
For some slightly more fine-grained control I also created a custom TestLoader, which makes sure the BeforeAfterSuite is only used to wrap test-method TestCase objects; it looks like this:
class BeforeAfterLoader(unittest.TestLoader):
    def loadTestsFromTestCase(self, testCaseClass):
        self.suiteClass = BeforeAfterSuite
        suite = unittest.TestLoader.loadTestsFromTestCase(self, testCaseClass)
        self.suiteClass = unittest.TestLoader.suiteClass
        return suite
Probably missing here is a try/except block around the before and after calls; without it, an exception there could fail all the test cases in the suite.
From the Python unittest documentation:
setUpClass():
A class method called before tests in an individual class are run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():
@classmethod
def setUpClass(cls):
    ...
New in version 2.7.
tearDownClass():
A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod():
@classmethod
def tearDownClass(cls):
    ...
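Applied to the question's scenario, a minimal sketch (the repository helpers are hypothetical):

import unittest

class MercurialIntegrationTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once, before any test in this class
        cls.repo = create_test_repository()        # hypothetical helper
        cls.clone = clone_repository(cls.repo)     # hypothetical helper

    @classmethod
    def tearDownClass(cls):
        # runs once, after all tests in this class
        remove_repositories(cls.repo, cls.clone)   # hypothetical helper

    def setUp(self):
        # per test: only restore both repositories to a known state
        reset_repositories(self.repo, self.clone)  # hypothetical helper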

How can I add a test method to a group of Django TestCase-derived classes?

I have a group of test cases that all should have exactly the same test done, along the lines of "Does method x return the name of an existing file?"
I thought that the best way to do it would be a base class deriving from TestCase that they all share, and simply add the test to that class. Unfortunately, the testing framework still tries to run the test for the base class, where it doesn't make sense.
class SharedTest(TestCase):
    def x(self):
        ...  # do test

class OneTestCase(SharedTest):
    ...  # my tests are performed, along with 'SharedTest.x()'
I tried to hack in a check to simply skip the test if it's called on an object of the base class rather than a derived class like this:
class SharedTest(TestCase):
    def x(self):
        if type(self) != type(SharedTest()):
            ...  # do test
        else:
            pass
but got this error:
ValueError: no such test method in <class 'tests.SharedTest'>: runTest
First, I'd like any elegant suggestions for doing this. Second, though I don't really want to use the type() hack, I would like to understand why it's not working.
You could use a mixin, taking advantage of the fact that the test runner only runs tests inheriting from unittest.TestCase (which Django's TestCase inherits from). For example:
class SharedTestMixin(object):
    # This class will not be executed by the test runner (it inherits from
    # object, not unittest.TestCase). If it were, assertEquals would fail,
    # as it is not a method that exists in `object`.
    def test_common(self):
        self.assertEquals(1, 1)

class TestOne(TestCase, SharedTestMixin):
    def test_something(self):
        pass
    # test_common is also run

class TestTwo(TestCase, SharedTestMixin):
    def test_another_thing(self):
        pass
    # test_common is also run
For more information on why this works, search for Python method resolution order and multiple inheritance.
I faced a similar problem. I couldn't prevent the test method in the base class from being executed, but I ensured that it did not exercise any actual code. I did this by checking for an attribute and returning immediately if it was set. This attribute was only set for the base class, so the tests ran everywhere except the base class.
class SharedTest(TestCase):
    def setUp(self):
        self.do_not_run = True

    def test_foo(self):
        if getattr(self, 'do_not_run', False):
            return
        # Rest of the test body.

class OneTestCase(SharedTest):
    def setUp(self):
        super(OneTestCase, self).setUp()
        self.do_not_run = False
This is a bit of a hack. There is probably a better way to do this but I am not sure how.
Update
As sdolan says, a mixin is the right way. Why didn't I see that before?
Update 2
(After reading comments) It would be nice if (1) the superclass method could avoid the hackish if getattr(self, 'do_not_run', False): check; and (2) the number of tests were counted accurately.
There is a possible way to do this. Django picks up and executes all test classes in tests, be it tests.py or a package with that name. If the test superclass is declared outside the tests module, this won't happen, yet it can still be inherited by test classes. For instance, SharedTest can be located in app.utils and then used by the test cases. This would be a cleaner version of the above solution.
# module app.utils.test
class SharedTest(TestCase):
    def test_foo(self):
        # Rest of the test body.
        ...

# module app.tests
from app.utils import test

class OneTestCase(test.SharedTest):
    ...
