How to add/change a variable across Django test functions - python

This question might be silly, but I did not find an answer.
I want to add a test function to a TestCase class to check the completeness of the tests: for example, that the urls were tested, the forms were tested, etc. For that I would like a variable that keeps a record of each test; if the urls were tested, then VARIABLE["urls"] = True.
Unfortunately, it looks like all the variables are reset in each test function. The value recorded in the urls test, VARIABLE["urls"], cannot be carried over to another test. Is there any way to have a global variable shared across all test functions?
Here is the revised working code:
class Test(TestCase):
    test = {}
    to_be_test = ["urls", "ajax", "forms", "templates"]

    def test_urls(self):
        ...
        self.test['urls'] = True

    def test_ajax(self):
        ...
        self.test['ajax'] = True

    def test_z_completion(self):
        for t in self.to_be_test:
            if t not in self.test:
                print("Warning: %s test is missing!" % t)
The expected result should be:
Warning: forms test is missing!
Warning: templates test is missing!

How about a class-level attribute? Each test method runs on a fresh instance of the class, but class attributes are shared across all instances, so state set in one test is visible in the next.
import unittest

class FakeTest(unittest.TestCase):
    cl_att = []

    def test_a1(self):
        self.assertTrue(True)
        self.cl_att.append('a1')
        print("cl_att:", self.cl_att)

    def test_a2(self):
        self.assertTrue(True)
        self.cl_att.append('a2')
        print("cl_att:", self.cl_att)

if __name__ == "__main__":
    unittest.main()
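Running this shows cl_att: ['a1'] for the first test and cl_att: ['a1', 'a2'] for the second, since the same class-level list is reused across both test methods.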

Related

Can I programmatically parametrize the return value of a pytest fixture?

I am working on some functional tests for my application. Depending on the logged-in user's permissions, the sidebar will have different links. I am parametrizing them (hard-coded) and running a test that works well (app is a webtest app):
endpoints = [
    '/',
    '/endpoint1',
    '/endpoint2',
    ...
]

@pytest.mark.parametrize('endpoint', endpoints)
def test_endpoints(endpoint, app):
    res = app.get(endpoint).maybe_follow()
    assert res.status_code == 200
I would like to avoid having to hard code the list of links for each type of user. Inside a fixture I can actually get them programmatically, so ideally I would like to parametrize the return value of this fixture in order to run the test function above:
@pytest.fixture
def endpoints(app):
    res = app.get('/login').follow()
    sidebar_links = []
    for link in res.html.ul.find_all('a'):
        if link.has_attr('href') and not link['href'].startswith('#'):
            sidebar_links.append(link['href'])
    return sidebar_links
Is this possible?
I would suggest that you use the pytest_configure() hook instead, as it runs before all your test methods. In your conftest.py file you can keep a global variable such as pytest.endpoints = [], then keep appending the endpoint values to it inside the hook.
Something like this:
pytest.endpoints = []

def pytest_configure(config, app):
    res = app.get('/login').follow()
    for link in res.html.ul.find_all('a'):
        if link.has_attr('href') and not link['href'].startswith('#'):
            pytest.endpoints.append(link['href'])
Within the test method, use the same variable as a parameter, like below:
@pytest.mark.parametrize("endpoint", pytest.endpoints)
def test_endpoints(endpoint):
Well, I am not completely aware of your design, so I cannot suggest anything further, but you can give this a try.
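As an aside, here is a hedged alternative sketch (not from the answer above) using pytest_generate_tests, the hook documented for programmatic parametrization. Hooks cannot request fixtures either, so a hypothetical make_app() helper stands in for the app fixture:

# conftest.py
def pytest_generate_tests(metafunc):
    if 'endpoint' in metafunc.fixturenames:
        app = make_app()  # hypothetical: build the webtest app without fixtures
        res = app.get('/login').follow()
        links = [a['href'] for a in res.html.ul.find_all('a')
                 if a.has_attr('href') and not a['href'].startswith('#')]
        # each collected link becomes one parametrized test case
        metafunc.parametrize('endpoint', links)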

Python unit testing with PyTest, stuck on more advanced testing

I have been using pytest for some time now to write simple tests (like the ones you find in tutorials and YouTube videos), and I thought it was now time to start writing actual tests for our Python scripts. The scripts are far more advanced than any shown in tutorials, so I am getting a bit stuck. I do not want the entire correct answer, but rather a nudge in the right direction if possible. Here is my issue:
We have a script that reads a .md text file and converts it to a pdf file based on an external template. Part of the script is below (I removed most of it because I first just want to get one running test):
class DocumentationEngine:
    def __init__(self, title, subtitle, series, style='TIIStyle_Digital_Aug_2020', templateFile='template.docet', tableOfContents=True, listOfFigures=False, listOfTables=False):
        self.title = title
        self.subtitle = subtitle
        self.series = series
        self.style = style
        self.template = {}
        self.hasTOC = tableOfContents
        self.hasLOF = listOfFigures
        self.hasLOT = listOfTables
        self.loadTemplate(templateFile)

    def loadTemplate(self, file='template.docet'):
        with open(file, "r") as templatefile:
            lines = templatefile.readlines()
            key = "dummy"
            value = ""
            for line in lines:
                line = line.strip()
                if line.startswith('[') and line.endswith(']'):
                    self.template[key] = value
                    key = line[1:-1]
                    value = ""
                else:
                    value += line + '\n'

    def build(self, versions=[], content='', filename='Documenter\\_Autogenerated'):
        document = self.template["doc"]
        document = document.replace("%%style%%", self.style)
        document = document.replace("%%body%%",
                                    self.buildFirstPage() +
                                    self.buildTableOfContents() +
                                    self.buildListOfFigures() +
                                    self.buildListOfTables() +
                                    self.buildVersionTable(versions, filename) +
                                    self.buildContentPages(content=content) +
                                    self.buildLastPage())
        return document

    def buildLastPage(self):
        return self.template["last_page"]
I am trying to write a simple unit test for the buildLastPage method and have been stuck for several days now.
I am not sure whether or not I need to mock the template file, use a fixture and/or if I can actually test only that method with all dependencies.
I started with the following:
from doceng import DocumentationEngine
import pytest

class Test:
    def test_buildLastPage(self):
        build_last_page = DocumentationEngine()
        assert build_last_page.template(1) == 1
which gives me an error regarding 3 required arguments. When adding the arguments like this:
from doceng import DocumentationEngine
import pytest

class Test:
    def test_buildLastPage(self, title, subtitle, series):
        build_last_page = DocumentationEngine()
        assert build_last_page.template(1) == 1
which gives me an error that the fixture is not found.
I added a fixture in the conftest.py file like this:
import pytest
from doceng import DocumentationEngine

@pytest.fixture
def title(title):
    return title("test")
which gets me another error: recursive dependency involving fixture 'title' detected.
I'm quite stuck, so any nudge in the right direction for a newbie would be highly appreciated.
The fixture error concerns your test function test_buildLastPage. The way you are using it, it only needs the self argument.
A test function in pytest without any decorators always expects its arguments to be fixtures of the same name. You did not define any such fixtures, and you also do not use the arguments in your function, so you can remove them safely.
The actual error points at DocumentationEngine(). The class expects 3 arguments when initializing the object, and you passed none. Check your __init__ function again to find the proper arguments.
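For illustration, here is a minimal sketch of a passing test under those constraints (an assumption, not from the answer above; the literal arguments are placeholders, and template.docet must be resolvable from the test's working directory):

from doceng import DocumentationEngine

class Test:
    def test_buildLastPage(self):
        # supply the three required positional arguments
        engine = DocumentationEngine(title='T', subtitle='Sub', series='S1')
        # buildLastPage() returns the 'last_page' section parsed from the template
        assert engine.buildLastPage() == engine.template['last_page']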

AttributeError when using request.function in pytest yield fixture

I have several pytest test cases that need nearly identical setup, so I would like to have them reuse a fixture to keep things DRY. The setup involves creating a new ticket in an external ticket tracking system, then the test cases interact with the ticket based on the data, and finally the fixture cleans up by closing out the ticket. The challenge here is that each test case needs slightly different data to be prepped in the ticket.
Each test case has different calls and different assertions, so I can't combine them all into a single parametrized test case with a single test fixture. Parametrizing the fixture itself would result in every test case running every permutation of the fixture data, which ends up with a lot of irrelevant test failures.
What I would like to do is set a variable in the test case, then have the fixture use that variable to set up the test data when creating the ticket. I've tried to use request.function as specified in the pytest fixture docs but I keep getting:
=================================== ERRORS ===================================
____________________ ERROR at setup of TestMCVE.test_stuff ___________________
request = <SubRequest 'ticket' for <Function 'test_stuff'>>

    @pytest.yield_fixture
    def ticket(request):
>       ticket_summary = getattr(request.function, "summary")
E       AttributeError: 'function' object has no attribute 'summary'

tests\test_mcve.py:11: AttributeError
My code is:
import pytest

def ticket_system_api(summary):
    # stub for MCVE purposes
    return summary

@pytest.yield_fixture
def ticket(request):
    ticket_summary = getattr(request.function, "summary")
    new_ticket = ticket_system_api(summary=ticket_summary)
    yield new_ticket

class TestMCVE:
    def test_stuff(self, ticket):
        summary = 'xyz'
        # do real things here, except MCVE
        assert 'xyz' == ticket
I've tried using request.node instead of request.function, as well as binding the summary variable per this answer (changing summary = 'xyz' to test_stuff.summary = 'xyz'), but these still fail with the same AttributeError.
How can I pass the function-level data to the fixture?
You can accomplish this with indirect parametrization. The API (and the documentation) could be friendlier, but the functionality you want is there.
Your example was very close, and minor tweaks were needed. Take a look:
import pytest

def ticket_system_api(summary):
    # stub for MCVE purposes
    return summary

@pytest.fixture
def ticket(request):
    # NOTE: This will raise `AttributeError` if the fixture
    # doesn't receive a parameter.
    ticket_summary = request.param
    new_ticket = ticket_system_api(summary=ticket_summary)
    return new_ticket

class TestMCVE:
    @pytest.mark.parametrize('ticket', ('abc',), indirect=True)
    def test_abc(self, ticket):
        # do real things here, except MCVE
        assert ticket == 'abc'

    @pytest.mark.parametrize('ticket', ('xyz',), indirect=True)
    def test_xyz(self, ticket):
        # do real things here, except MCVE
        assert ticket == 'xyz'

pytest fixture to introspect calling function

I have a test class and a setup function that looks like this:
@pytest.fixture(autouse=True, scope='function')
def setup(self, request):
    self.client = MyClass()

    first_patcher = patch('myclass.myclass.function_to_patch')
    first_mock = first_patcher.start()
    first_mock.return_value = 'foo'

    value_to_return = getattr(request, 'value_name', None)
    second_patcher = patch('myclass.myclass.function_two')
    second_mock = second_patcher.start()
    second_mock.return_value = value_to_return

    # could clean up my mocks here, but don't care right now
I see in the documentation for pytest that introspection can be done for a module-level value:
val = getattr(request.module, 'val_name', None)
But I want to be able to specify different values to return based on the test I am in, so I am looking for a way to introspect the test function, not the test module.
http://pytest.org/latest/fixture.html#fixtures-can-introspect-the-requesting-test-context
You can use request.function to get to the test function. Just follow the link on the webpage you referenced to see what is available on the test request object :)
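For illustration, here is a hedged sketch of one working variant (an assumption, not from the answer above): modern pytest lets a fixture read a marker from the requesting test via request.node.get_closest_marker, which avoids setting attributes on the function object. The marker name return_value is made up for this example:

import pytest

@pytest.fixture(autouse=True)
def setup(request):
    # read the (hypothetical) marker attached to the requesting test
    marker = request.node.get_closest_marker("return_value")
    value_to_return = marker.args[0] if marker else None
    ...

@pytest.mark.return_value("foo")  # register the custom marker in pytest.ini to avoid warnings
def test_example():
    ...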
Maybe the documentation has changed since the time of the accepted answer; at least for me it was not clear how to "just follow the link". So I thought I'd update this thread with the link itself:
https://pytest.org/en/6.2.x/reference.html#request
Edit December 2021
Even though the link is correct now, I think this statement from the pytest documentation is just not correct:
Fixture functions can accept the request object to introspect the “requesting” test function ...
While I found some examples of getting attributes of the module, I did not find a single working example of introspecting the test function that requests the fixture. This may be related to collection and runtime order.
What really helped me get the desired behavior was the factory idiom from a little further down in the pytest documentation:
https://pytest.org/en/6.2.x/fixture.html#factories-as-fixtures
Set up the fixture factory
@pytest.fixture(scope='function')
def getQueryResult() -> object:
    def _impl(_mrId: int = 7622):
        return QueryResult(_mrId)
    return _impl
Usage
# Concrete value
def test_foo(getQueryResult):
    queryResult = getQueryResult(4711)
    ...

# Default value
def test_bar(getQueryResult):
    queryResult = getQueryResult()
    ...

Python unittest.TestCase execution order

Is there a way in Python unittest to set the order in which test cases are run?
In my current TestCase class, some testcases have side effects that set conditions for the others to run properly. Now I realize the proper way to do this is to use setUp() to do all setup related things, but I would like to implement a design where each successive test builds slightly more state that the next can use. I find this much more elegant.
class MyTest(TestCase):
    def test_setup(self):
        ...  # Do something
    def test_thing(self):
        ...  # Do something that depends on test_setup()
Ideally, I would like the tests to be run in the order they appear in the class. It appears that they run in alphabetical order.
Don't make them independent tests - if you want a monolithic test, write a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def _steps(self):
        for name in dir(self):  # dir() result is implicitly sorted
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self._steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
If the test later starts failing and you want information on all failing steps instead of halting the test case at the first failed step, you can use the subtests feature: https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests
(The subtest feature is available via unittest2 for versions prior to Python 3.4: https://pypi.python.org/pypi/unittest2 )
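For illustration, a minimal sketch of that subtests variant (an assumption, not part of the original answer), reusing the _steps() generator from the class above:

    def test_steps(self):
        for name, step in self._steps():
            # subTest records each failing step and keeps going
            with self.subTest(step=name):
                step()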
It's good practice to always write a monolithic test for such expectations. However, if you are a goofy dude like me, then you could simply write ugly-looking methods in alphabetical order so that they are sorted from a to b, as mentioned in the Python documentation - unittest — Unit testing framework:
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings
Example
def test_a_first():
    print("1")

def test_b_next():
    print("2")

def test_c_last():
    print("3")
From unittest — Unit testing framework, section Organizing test code:
Note: The order in which the various tests will be run is determined by sorting the test method names with respect to the built-in ordering for strings.
So just make sure test_setup's name has the smallest string value.
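For example, numeric prefixes give an explicit alphabetical order:

class MyTest(TestCase):
    def test_0_setup(self):
        ...  # '0' sorts before '1', so this runs first
    def test_1_thing(self):
        ...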
Note that you should not rely on this behavior: different test functions are supposed to be independent of the order of execution. See ncoghlan's answer above for a solution if you explicitly need an order.
Another way that I didn't see listed in any related questions: use a TestSuite.
Another way to accomplish ordering is to add the tests to a unittest.TestSuite. This seems to respect the order in which the tests are added to the suite using suite.addTest(...). To do this:
Create one or more TestCase subclasses:

class FooTestCase(unittest.TestCase):
    def test_ten(self):
        print('Testing ten (10)...')
    def test_eleven(self):
        print('Testing eleven (11)...')

class BarTestCase(unittest.TestCase):
    def test_twelve(self):
        print('Testing twelve (12)...')
    def test_nine(self):
        print('Testing nine (09)...')
Create a callable that generates the test suite, with tests added in your desired order (adapted from the documentation and this question):

def suite():
    suite = unittest.TestSuite()
    suite.addTest(BarTestCase('test_nine'))
    suite.addTest(FooTestCase('test_ten'))
    suite.addTest(FooTestCase('test_eleven'))
    suite.addTest(BarTestCase('test_twelve'))
    return suite
Execute the test suite, e.g.:

if __name__ == '__main__':
    runner = unittest.TextTestRunner(failfast=True)
    runner.run(suite())
For context, I had a need for this and wasn't satisfied with the other options. I settled on the above way of doing test ordering.
I didn't see this TestSuite method listed in any of the several "unit-test ordering questions" (e.g., this question and others covering execution order, changing order, or test order).
I ended up with a simple solution that worked for me:
class SequentialTestLoader(unittest.TestLoader):
    def getTestCaseNames(self, testCaseClass):
        test_names = super().getTestCaseNames(testCaseClass)
        testcase_methods = list(testCaseClass.__dict__.keys())
        test_names.sort(key=testcase_methods.index)
        return test_names
And then:

unittest.main(testLoader=SequentialTestLoader())
A simple and flexible way is to assign a comparator function to unittest.TestLoader.sortTestMethodsUsing:
Function to be used to compare method names when sorting them in getTestCaseNames() and all the loadTestsFrom*() methods.
Minimal usage:
import unittest

class Test(unittest.TestCase):
    def test_foo(self):
        """ test foo """
        self.assertEqual(1, 1)

    def test_bar(self):
        """ test bar """
        self.assertEqual(1, 1)

if __name__ == "__main__":
    test_order = ["test_foo", "test_bar"]  # could be sys.argv
    loader = unittest.TestLoader()
    loader.sortTestMethodsUsing = lambda x, y: test_order.index(x) - test_order.index(y)
    unittest.main(testLoader=loader, verbosity=2)
Output:
test_foo (__main__.Test)
test foo ... ok
test_bar (__main__.Test)
test bar ... ok
Here's a proof of concept for running tests in source code order instead of the default lexical order (output is as above).
import inspect
import unittest

class Test(unittest.TestCase):
    def test_foo(self):
        """ test foo """
        self.assertEqual(1, 1)

    def test_bar(self):
        """ test bar """
        self.assertEqual(1, 1)

if __name__ == "__main__":
    test_src = inspect.getsource(Test)
    unittest.TestLoader.sortTestMethodsUsing = lambda _, x, y: (
        test_src.index(f"def {x}") - test_src.index(f"def {y}")
    )
    unittest.main(verbosity=2)
I used Python 3.8.0 in this post.
Tests which really depend on each other should be explicitly chained into one test.
Tests which require different levels of setup could also have their corresponding setUp() run enough of the setup - various ways are thinkable.
Otherwise unittest handles the test classes, and the test methods inside them, in alphabetical order by default (even when loader.sortTestMethodsUsing is None), because dir() is used internally, which is guaranteed to sort.
The latter behavior can be exploited for practicality - e.g. for having the latest-work tests run first to speed up the edit-test-run cycle.
But that behavior should not be used to establish real dependencies. Consider that tests can be run individually via command-line options, etc.
One approach can be to have those sub-tests not be treated as tests by the unittest module, by prefixing their names with _, and then building one test case that executes these sub-operations in the right order.
This is better than relying on the sorting order of the unittest module, as that might change tomorrow, and achieving a topological sort on the order would not be very straightforward.
An example of this approach, taken from here (Disclaimer: my own module), is below.
Here, the test case runs the independent tests, such as checking that the table parameter is not set (test_table_not_set) or testing the primary key (test_primary_key), in parallel as usual, but a CRUD test makes sense only if done in the right order and with the state set by the previous operations. Hence those operations have been made separate units, but not tests. Another test (test_CRUD) then builds the right order of those operations and tests them.
import os
import sqlite3
import unittest

from sql30 import db

DB_NAME = 'review.db'

class Reviews(db.Model):
    TABLE = 'reviews'
    PKEY = 'rid'
    DB_SCHEMA = {
        'db_name': DB_NAME,
        'tables': [
            {
                'name': TABLE,
                'fields': {
                    'rid': 'uuid',
                    'header': 'text',
                    'rating': 'int',
                    'desc': 'text'
                },
                'primary_key': PKEY
            }]
    }
    VALIDATE_BEFORE_WRITE = True

class ReviewTest(unittest.TestCase):

    def setUp(self):
        if os.path.exists(DB_NAME):
            os.remove(DB_NAME)

    def test_table_not_set(self):
        """
        Tests for raise of assertion when table is not set.
        """
        db = Reviews()
        try:
            db.read()
        except Exception as err:
            self.assertIn('No table set for operation', str(err))

    def test_primary_key(self):
        """
        Ensures, primary key is honored.
        """
        db = Reviews()
        db.table = 'reviews'
        db.write(rid=10, rating=5)
        try:
            db.write(rid=10, rating=4)
        except sqlite3.IntegrityError as err:
            self.assertIn('UNIQUE constraint failed', str(err))

    def _test_CREATE(self):
        db = Reviews()
        db.table = 'reviews'
        # backward compatibility for 'write' API
        db.write(tbl='reviews', rid=1, header='good thing', rating=5)

        # New API with 'create'
        db.create(tbl='reviews', rid=2, header='good thing', rating=5)

        # Backward compatibility for 'write' API, without tbl,
        # explicitly passed
        db.write(tbl='reviews', rid=3, header='good thing', rating=5)

        # New API with 'create', without table name explicitly passed.
        db.create(tbl='reviews', rid=4, header='good thing', rating=5)

        db.commit()  # Save the work.

    def _test_READ(self):
        db = Reviews()
        db.table = 'reviews'

        rec1 = db.read(tbl='reviews', rid=1, header='good thing', rating=5)
        rec2 = db.read(rid=1, header='good thing')
        rec3 = db.read(rid=1)

        self.assertEqual(rec1, rec2)
        self.assertEqual(rec2, rec3)

        recs = db.read()  # Read all
        self.assertEqual(len(recs), 4)

    def _test_UPDATE(self):
        db = Reviews()
        db.table = 'reviews'

        where = {'rid': 2}
        db.update(condition=where, header='average item', rating=2)
        db.commit()

        rec = db.read(rid=2)[0]
        self.assertIn('average item', rec)

    def _test_DELETE(self):
        db = Reviews()
        db.table = 'reviews'

        db.delete(rid=2)
        db.commit()
        self.assertFalse(db.read(rid=2))

    def test_CRUD(self):
        self._test_CREATE()
        self._test_READ()
        self._test_UPDATE()
        self._test_DELETE()

    def tearDown(self):
        os.remove(DB_NAME)
You can start with:

test_order = ['base']

def index_of(item, lst):
    try:
        return lst.index(item)
    except ValueError:
        return len(lst) + 1

Second, define the ordering function:

def order_methods(x, y):
    x_rank = index_of(x[5:100], test_order)
    y_rank = index_of(y[5:100], test_order)
    return (x_rank > y_rank) - (x_rank < y_rank)

Third, set it on the class:

class ClassTests(unittest.TestCase):
    unittest.TestLoader.sortTestMethodsUsing = staticmethod(order_methods)
ncoghlan's answer was exactly what I was looking for when I came to this question. I ended up modifying it to allow each step-test to run even if a previous step had already thrown an error; this helps me (and maybe you!) discover and plan for the propagation of errors in multi-threaded, database-centric software.
class Monolithic(TestCase):
    def step1_testName1(self):
        ...

    def step2_testName2(self):
        ...

    def steps(self):
        '''
        Generates the step methods from their parent object
        '''
        for name in sorted(dir(self)):
            if name.startswith('step'):
                yield name, getattr(self, name)

    def test_steps(self):
        '''
        Run the individual steps associated with this test
        '''
        # Create a flag that determines whether to raise an error at
        # the end of the test
        failed = False

        # An empty string that will accumulate error messages for
        # each failing step
        fail_message = ''

        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                # A step has failed, the test should continue through
                # the remaining steps, but eventually fail
                failed = True

                # Get the name of the method -- so the fail message is
                # nicer to read :)
                name = name.split('_')[1]
                # Append this step's exception to the fail message
                fail_message += "\n\nFAIL: {}\n {} failed ({}: {})".format(name, step, type(e), e)

        # Check if any of the steps failed
        if failed is True:
            # Fail the test with the accumulated exception message
            self.fail(fail_message)
I also wanted to specify a particular order of execution for my tests. The main differences from the other answers here are: I wanted to preserve a more verbose test method name without replacing the whole name with step1, step2, etc., and I wanted the printed method execution in the console to have some granularity, as opposed to the monolithic solution in some of the other answers.
So where the execution of a monolithic test method looked like this:
test_booking (__main__.TestBooking) ... ok
I wanted:
test_create_booking__step1 (__main__.TestBooking) ... ok
test_process_booking__step2 (__main__.TestBooking) ... ok
test_delete_booking__step3 (__main__.TestBooking) ... ok
How to achieve this
I suffixed my method names with __step&lt;order&gt;, for example (the order of definition is not important):
def test_create_booking__step1(self):
    [...]

def test_delete_booking__step3(self):
    [...]

def test_process_booking__step2(self):
    [...]
For the test suite, override the __iter__ function, which builds an iterator for the test methods:
class BookingTestSuite(unittest.TestSuite):
    """ Extends the functionality of the standard test suites """

    def __iter__(self):
        for suite in self._tests:
            suite._tests = sorted(
                [x for x in suite._tests if hasattr(x, '_testMethodName')],
                key=lambda x: int(x._testMethodName.split("step")[1])
            )
        return iter(self._tests)
This will sort test methods into order and execute them accordingly.
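To wire this in, here is a hedged sketch using the standard load_tests protocol, assuming the test class is named TestBooking:

def load_tests(loader, standard_tests, pattern):
    suite = BookingTestSuite()
    # add the class's tests as one nested suite so __iter__ above can sort them
    suite.addTest(loader.loadTestsFromTestCase(TestBooking))
    return suite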
