I want to use a mock object to emulate the database on an insert operation.
For example, suppose I have a method like insert(1) that calls a db connection object (let's call it db_obj) and performs insert into mytable (col1) values (%s), where %s is 1.
What I had in mind: create a mock object for db_obj that stores the value of col1, so when db_obj.insert(1) is called, the mocked db_obj stores col1=1, and then I can just read the mocked object's col1 value and assert that it's 1 as expected.
Does this approach make sense? If so, how can I do this using pytest?
Here's an example of what I'm trying:
from hub.scripts.tasks.notify_job_module import http_success
from hub.scripts.tasks.notify_job_module import proceed_to
from hub.core.connector.mysql_db import MySql
from unittest.mock import patch
import logging

def proceed_to(conn, a, b, c, d, e):
    conn.run_update('insert into table_abc (a,b,c,d,e) values (%s,%s,%s,%s,%s)', [a, b, c, d, e])

class MySql:
    # this is the real conn implementation
    def connect(self):
        # ... create the database connection here
        pass

    def run_update(self, query, params=None):
        # ... perform some insert into the database here
        pass

class TestMySql:
    # this is the mock implementation I want to get rid of
    def connect(self):
        pass

    def run_update(self, query, params=None):
        self.result = params

    def get_result(self):
        return self.result

def test_proceed_to():
    logger = logging.getLogger("")
    conn = TestMySql()  # originally, my code uses MySql() here
    conn.connect()
    proceed_to(conn, 1, '2', 3, 4, 5)
    assert conn.get_result()[3] == 4
Please notice that I had to replace MySql() with TestMySql(), so what I've done is implement my own mock object manually.
This way it works, but I feel like it's obviously not the best approach here. Why?
Because we're talking about mock objects, the definition of proceed_to is irrelevant here :-) The thing is: I had to implement TestMySql.get_result() and store the data I want in self.result to get the result I want, while MySql itself does not have a get_result() at all!
What I'd like to do is avoid having to create my own mock object and use some smarter approach with unittest.mock instead.
What you are testing is basically what arguments run_update is called with. You can just mock the connection and use the assert_called_xxx methods on the mock; if you want to check specific arguments instead of all arguments, you can check call_args on the mock. Here is an example that matches your sample code:
from unittest import mock

@mock.patch("some_module.MySql")
def test_proceed_to(mocked):
    # we need the mocked instance, not the class
    sql_mock = mocked.return_value
    conn = sql_mock.connect()  # conn is now a mock - you also could have created a mock manually
    proceed_to(conn, 1, '2', 3, 4, 5)
    # assert that the function was called with the correct arguments
    conn.run_update.assert_called_once()
    # conn.run_update.assert_called_once_with() would compare all arguments
    assert conn.run_update.call_args[0][1] == [1, '2', 3, 4, 5]
    # call_args[0] is the tuple of positional arguments, so this checks the second one
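If you prefer comparing all arguments at once, the assert_called_once_with variant mentioned in the comment would look like this (the query string is taken from the sample proceed_to above):

conn.run_update.assert_called_once_with(
    'insert into table_abc (a,b,c,d,e) values (%s,%s,%s,%s,%s)',
    [1, '2', 3, 4, 5],
)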
I'm writing tests for a POST API which returns the resource that gets created. But how do I pass this data to a fixture in Python so it can clean up after the test is completed?
Cleanup:

@pytest.fixture(scope='function')
def delete_after_post(request):
    def cleanup():
        # Get the ID of the resource to clean up
        # Call the delete API with the ID to delete the resource
        pass
    request.addfinalizer(cleanup)
Test:

def test_post(delete_after_post):
    Id = post(api)
    assert Id
What is the best way to pass the response (the ID) back to the fixture for the cleanup to kick in? I don't want to do the cleanup as part of the test.
You can store that ID on the request node and access it anywhere in your code via request.node.variableName. For example, suppose your method for deleting a resource is delete(resource_id); then:
conftest.py
import pytest

@pytest.fixture(scope='function')
def delete_after_post(request):
    def cleanup():
        print(request.node.resourceId)
        # Get the ID of the resource using request.node.resourceId
        # Call the delete API with the ID to delete the resource
    request.addfinalizer(cleanup)
Test file xyz_test.py:

def test_post(delete_after_post, request):
    request.node.resourceId = '3'
I created a fixture that collects cleanup functions for this purpose:
import pytest

@pytest.fixture
def cleaner():
    funcs = []

    def add_func(func):
        funcs.append(func)

    yield add_func
    for func in funcs:
        func()

def test_func(cleaner):
    x = 5
    cleaner(lambda: print('cleaning', x))
This way you don't need a separate fixture for each use case.
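Applied to the POST API question above, a minimal sketch (post(api) and delete(resource_id) are the hypothetical helpers from that question):

def test_post(cleaner):
    resource_id = post(api)
    cleaner(lambda: delete(resource_id))
    assert resource_id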
The way I did it was to create a class called TestRunContext and set static variables on it to pass data around.
File: test_run_context.py

class TestRunContext:
    id_under_test = 0

File: conftest.py

import pytest
import requests

from test_run_context import TestRunContext

@pytest.fixture(scope='function')
def delete_after_post():
    print('hello')
    yield
    url = 'http://127.0.0.1:5000/api/centres/{0}'.format(TestRunContext.id_under_test)
    resp = requests.delete(url)
File: test_post.py

import requests

from test_run_context import TestRunContext

def test_creates_post(delete_after_post):
    post_data = {
        'name': 'test',
        'address1': 'test',
        'city': 'test',
        'postcode': 'test',
    }
    url = 'http://127.0.0.1:5000/api/centres'
    resp = requests.post(url, post_data)
    TestRunContext.id_under_test = resp.json()['id']  # assuming the API returns the created resource as JSON
    assert resp
This works for me for now, but I'm hoping to find a better solution than shuttling state through a context class. I really don't like this solution.
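One alternative without module-level state: a fixture can yield a mutable holder that the test fills in, and the fixture reads it back after the test finishes. A minimal sketch (the URLs mirror the example above; the JSON shape of the response is an assumption):

import pytest
import requests

@pytest.fixture
def created_resource():
    holder = {}
    yield holder  # the test stores the new resource ID in here
    if 'id' in holder:
        url = 'http://127.0.0.1:5000/api/centres/{0}'.format(holder['id'])
        requests.delete(url)

def test_creates_post(created_resource):
    resp = requests.post('http://127.0.0.1:5000/api/centres', {'name': 'test'})
    created_resource['id'] = resp.json()['id']  # assuming the API returns the created resource as JSON
    assert resp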
I am in the process of learning unit testing, but I am struggling to understand how to mock functions. I have reviewed many how-tos and examples, but the concept is not transferring well enough for me to use it on my own code. I am hoping that getting this to work on an actual code example will help.
In this case I am trying to mock isTokenValid.
Here is example code of what I want to mock.
In the library file:

import socket
import xmlrpc.client as xmlrpclib

class Library(object):
    def function(self):
        # ...
        AuthURL = 'https://example.com/xmlrpc/Auth'
        auth_server = xmlrpclib.ServerProxy(AuthURL)
        socket.setdefaulttimeout(20)
        try:
            if pull == 0:
                valid = auth_server.isTokenValid(token)
        # ...
In my unit test file I have:

import unittest
from unittest.mock import patch

import library

class Tester(unittest.TestCase):
    @patch('library.xmlrpclib.ServerProxy')
    def test_xmlrpclib(self, fake_xmlrpclib):
        assert 'something'
How would I mock the code listed in 'function'? The token can be any number as a string, and valid would be an int(1).
First of all, you can and should mock xmlrpc.client.ServerProxy; your library imports xmlrpc.client under a new name, but it is still the same module object, so both xmlrpclib.ServerProxy in your library and xmlrpc.client.ServerProxy lead to the same object.
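You can verify the aliasing yourself (a quick sketch):

import xmlrpc.client
import xmlrpc.client as xmlrpclib

# both names are bound to the same module object,
# so patching one is patching the other
assert xmlrpclib is xmlrpc.client
assert xmlrpclib.ServerProxy is xmlrpc.client.ServerProxy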
Next, look at how the object is used, and look for calls, the (...) syntax. Your library uses the server proxy like this:

# a call to create an instance
auth_server = xmlrpclib.ServerProxy(AuthURL)
# on the instance, a call to another method
valid = auth_server.isTokenValid(token)
So there is a chain here: the mock is called, and on the return value another attribute is looked up, which is also called. When mocking, you need to mirror that same chain; use the Mock.return_value attribute for this. By default a new mock instance is returned when you call a mock, but you can also set test values.
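To see the chain in isolation (a minimal sketch, independent of the library code):

from unittest.mock import Mock

mock_serverproxy = Mock()
# configure the instance that ServerProxy(AuthURL) would return
mock_serverproxy.return_value.isTokenValid.return_value = 1

auth_server = mock_serverproxy('https://example.com/xmlrpc/Auth')  # the 'ServerProxy' call
assert auth_server.isTokenValid('some token') == 1                 # the chained method call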
So to test your code, you'd want to influence what auth_server.isTokenValid(token) returns, and test if your code works correctly. You may also want to assert that the correct URL is passed to the ServerProxy instance.
Create separate tests for different outcomes. Perhaps the token is valid in one case, not valid in another, and you'd want to test both cases:
class Tester(unittest.TestCase):
    @patch('xmlrpc.client.ServerProxy')
    def test_valid_token(self, mock_serverproxy):
        # the ServerProxy(AuthURL) return value
        mock_auth_server = mock_serverproxy.return_value
        # configure a response for a valid token
        mock_auth_server.isTokenValid.return_value = 1
        # now run your library code
        return_value = library.Library().function()
        # and make test assertions
        # about the server proxy
        mock_serverproxy.assert_called_with('some_url')
        # and about the auth_server.isTokenValid call
        mock_auth_server.isTokenValid.assert_called_once()
        # and if the result of the function is as expected
        self.assertEqual(return_value, 'expected return value')

    @patch('xmlrpc.client.ServerProxy')
    def test_invalid_token(self, mock_serverproxy):
        # the ServerProxy(AuthURL) return value
        mock_auth_server = mock_serverproxy.return_value
        # configure a response; now testing for an invalid token instead
        mock_auth_server.isTokenValid.return_value = 0
        # now run your library code
        return_value = library.Library().function()
        # and make test assertions
        # about the server proxy
        mock_serverproxy.assert_called_with('some_url')
        # and about the auth_server.isTokenValid call
        mock_auth_server.isTokenValid.assert_called_once()
        # and if the result of the function is as expected
        self.assertEqual(return_value, 'expected return value')
There are many mock attributes to use, and you can also change your patch decorator usage a little, as follows:
class Tester(unittest.TestCase):
    def test_xmlrpclib(self):
        # ServerProxy looks up methods dynamically, so isTokenValid does not
        # exist as a class attribute; create=True lets patch create it
        with patch('library.xmlrpclib.ServerProxy.isTokenValid', create=True) as isTokenValid:
            isTokenValid.return_value = 1  # assume this token is valid
            self.assertEqual(isTokenValid.call_count, 0)
            # your test code calling xmlrpclib goes here
            self.assertEqual(isTokenValid.call_count, 1)
            token = isTokenValid.call_args[0]  # the token your code passed in
            self.assertEqual(isTokenValid.return_value, 1)
You can adjust the code above to satisfy your requirements.
I have a test class and a setup function that looks like this:
@pytest.fixture(autouse=True, scope='function')
def setup(self, request):
    self.client = MyClass()

    first_patcher = patch('myclass.myclass.function_to_patch')
    first_mock = first_patcher.start()
    first_mock.return_value = 'foo'

    value_to_return = getattr(request, 'value_name', None)
    second_patcher = patch('myclass.myclass.function_two')
    second_mock = second_patcher.start()
    second_mock.return_value = value_to_return
    # could clean up my mocks here, but don't care right now
I see in the documentation for pytest that introspection can be done for a module-level value:

val = getattr(request.module, 'val_name', None)

But I want to be able to specify different values to return based on the test I am in, so I am looking for a way to introspect the test function, not the test module.
http://pytest.org/latest/fixture.html#fixtures-can-introspect-the-requesting-test-context
You can use request.function to get to the test function. Just follow the link on the webpage you referenced to see what is available on the test request object :)
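For example, a minimal sketch mirroring the question's getattr pattern (the value_name attribute and names below are illustrative):

import pytest

@pytest.fixture(autouse=True)
def setup(request):
    # look the value up on the requesting test function instead of the module
    value_to_return = getattr(request.function, 'value_name', None)
    print(value_to_return)

def test_something():
    assert True
test_something.value_name = 'foo'  # picked up by the fixture above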
Maybe the documentation has changed since the time of the accepted answer. At least for me it was not clear how to "just follow the link", so I thought I'd update this thread with the link itself:
https://pytest.org/en/6.2.x/reference.html#request
Edit, December 2021: even though the link is correct now, I think this statement from the pytest documentation is just not correct:

Fixture functions can accept the request object to introspect the “requesting” test function ...

While I found some examples for getting attributes of the module, I did not find a single working example of introspecting the test function that requests the fixture. It may be related to collection and runtime order.
What really helped me get the desired behavior was to use the factory idiom found a little further down in the pytest documentation:
https://pytest.org/en/6.2.x/fixture.html#factories-as-fixtures
Set up the fixture factory:

@pytest.fixture(scope='function')
def getQueryResult() -> object:
    def _impl(_mrId: int = 7622):
        return QueryResult(_mrId)
    return _impl
Usage:

# Concrete value
def test_foo(getQueryResult):
    queryResult = getQueryResult(4711)
    ...

# Default value
def test_bar(getQueryResult):
    queryResult = getQueryResult()
    ...
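The same factory idea can be combined with the patching from the original question, so each test picks the mocked return value itself. A sketch (myclass.myclass.function_two is the target from the question; patch.stopall undoes every patch started via start()):

import pytest
from unittest.mock import patch

@pytest.fixture
def patch_function_two():
    def _patch(value_to_return=None):
        patcher = patch('myclass.myclass.function_two', return_value=value_to_return)
        return patcher.start()
    yield _patch
    patch.stopall()  # clean up all patches started through the factory

def test_with_bar(patch_function_two):
    mock = patch_function_two('bar')
    # ... code under test now sees 'bar' from function_two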
Is there a way in Python unittest to set the order in which test cases are run?
In my current TestCase class, some test cases have side effects that set up conditions for the others to run properly. Now, I realize the proper way to do this is to use setUp() for all setup-related things, but I would like to implement a design where each successive test builds slightly more state that the next can use. I find this much more elegant.
class MyTest(TestCase):
    def test_setup(self):
        # Do something
        pass

    def test_thing(self):
        # Do something that depends on test_setup()
        pass
Ideally, I would like the tests to be run in the order they appear in the class. It appears that they run in alphabetical order.
Don't make them independent tests - if you want a monolithic test, write a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def _steps(self):
        for name in dir(self):  # dir() result is implicitly sorted
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self._steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
If the test later starts failing and you want information on all failing steps instead of halting the test case at the first failed step, you can use the subtests feature: https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests
(The subtest feature is available via unittest2 for versions prior to Python 3.4: https://pypi.python.org/pypi/unittest2 )
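With subtests, the loop above keeps going after a failing step while still reporting each failure individually (a minimal sketch reusing the _steps generator):

def test_steps(self):
    for name, step in self._steps():
        with self.subTest(name):
            step()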
It's good practice to always write a monolithic test for such expectations. However, if you are a goofy dude like me, then you could simply write ugly-looking methods in alphabetical order so that they are sorted from a to z, as mentioned in the Python documentation, unittest — Unit testing framework:

Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
Example:

def test_a_first():
    print("1")

def test_b_next():
    print("2")

def test_c_last():
    print("3")
From unittest — Unit testing framework, section Organizing test code:

Note: The order in which the various tests will be run is determined by sorting the test method names with respect to the built-in ordering for strings.

So just make sure test_setup's name has the smallest string value.
Note that you should not rely on this behavior — different test functions are supposed to be independent of the order of execution. See ncoghlan's answer above for a solution if you explicitly need an order.
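For the example from the question, that can be as simple as embedding a sort key in the names (a sketch; alphabetical order then matches the intended order):

class MyTest(TestCase):
    def test_1_setup(self):
        # sorts before 'test_2_thing', so it runs first
        ...

    def test_2_thing(self):
        # relies on the state built up by test_1_setup
        ...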
Another way that I didn't see listed in any related questions: use a TestSuite. Ordering can be accomplished by adding the tests to a unittest.TestSuite, which respects the order in which the tests are added via suite.addTest(...). To do this:
Create one or more TestCase subclasses:

class FooTestCase(unittest.TestCase):
    def test_ten(self):
        print('Testing ten (10)...')

    def test_eleven(self):
        print('Testing eleven (11)...')

class BarTestCase(unittest.TestCase):
    def test_twelve(self):
        print('Testing twelve (12)...')

    def test_nine(self):
        print('Testing nine (09)...')
Create a callable test-suite generator that adds the tests in your desired order, adapted from the documentation and this question:

def suite():
    suite = unittest.TestSuite()
    suite.addTest(BarTestCase('test_nine'))
    suite.addTest(FooTestCase('test_ten'))
    suite.addTest(FooTestCase('test_eleven'))
    suite.addTest(BarTestCase('test_twelve'))
    return suite
Execute the test suite, e.g.:

if __name__ == '__main__':
    runner = unittest.TextTestRunner(failfast=True)
    runner.run(suite())
For context, I had a need for this and wasn't satisfied with the other options, so I settled on the above way of doing test ordering. I didn't see this TestSuite method listed in any of the several "unit-test ordering" questions (e.g., this question and others covering execution order, changing order, or test order).
I ended up with a simple solution that worked for me:

class SequentialTestLoader(unittest.TestLoader):
    def getTestCaseNames(self, testCaseClass):
        test_names = super().getTestCaseNames(testCaseClass)
        testcase_methods = list(testCaseClass.__dict__.keys())
        test_names.sort(key=testcase_methods.index)
        return test_names

And then:

unittest.main(testLoader=utils.SequentialTestLoader())
A simple and flexible way is to assign a comparator function to unittest.TestLoader.sortTestMethodsUsing:
Function to be used to compare method names when sorting them in getTestCaseNames() and all the loadTestsFrom*() methods.
Minimal usage:
import unittest

class Test(unittest.TestCase):
    def test_foo(self):
        """ test foo """
        self.assertEqual(1, 1)

    def test_bar(self):
        """ test bar """
        self.assertEqual(1, 1)

if __name__ == "__main__":
    test_order = ["test_foo", "test_bar"]  # could be sys.argv
    loader = unittest.TestLoader()
    loader.sortTestMethodsUsing = lambda x, y: test_order.index(x) - test_order.index(y)
    unittest.main(testLoader=loader, verbosity=2)
Output:
test_foo (__main__.Test)
test foo ... ok
test_bar (__main__.Test)
test bar ... ok
Here's a proof of concept for running tests in source code order instead of the default lexical order (output is as above).
import inspect
import unittest

class Test(unittest.TestCase):
    def test_foo(self):
        """ test foo """
        self.assertEqual(1, 1)

    def test_bar(self):
        """ test bar """
        self.assertEqual(1, 1)

if __name__ == "__main__":
    test_src = inspect.getsource(Test)
    unittest.TestLoader.sortTestMethodsUsing = lambda _, x, y: (
        test_src.index(f"def {x}") - test_src.index(f"def {y}")
    )
    unittest.main(verbosity=2)
I used Python 3.8.0 in this post.
Tests which really depend on each other should be explicitly chained into one test.
Tests which require different levels of setup could also have their corresponding setUp() run enough setup; various ways are thinkable.
Otherwise unittest handles the test classes, and the test methods inside them, in alphabetical order by default (even when loader.sortTestMethodsUsing is None), because dir() is used internally, and dir() guarantees sorted output.
The latter behavior can be exploited for practicality, e.g. for having the latest-work tests run first to speed up the edit-test cycle. But that behavior should not be used to establish real dependencies. Consider that tests can also be run individually via command-line options, etc.
One approach can be to stop those sub-tests from being treated as tests by the unittest module, by prefixing their names with _, and then to build one test case that runs these sub-operations in the right order.
This is better than relying on the sorting order of the unittest module, as that might change tomorrow, and achieving a topological sort on the order would not be very straightforward.
An example of this approach, taken from here (disclaimer: my own module), is below.
Here, the test case still runs the independent tests, such as checking that the table parameter is not set (test_table_not_set) or testing the primary key (test_primary_key), in parallel, but a CRUD test makes sense only if done in the right order, with the state set by the previous operations. Hence those tests have been made separate units, but not tests. Another test (test_CRUD) then strings those operations together in the right order and tests them.
import os
import sqlite3
import unittest

from sql30 import db

DB_NAME = 'review.db'

class Reviews(db.Model):
    TABLE = 'reviews'
    PKEY = 'rid'
    DB_SCHEMA = {
        'db_name': DB_NAME,
        'tables': [
            {
                'name': TABLE,
                'fields': {
                    'rid': 'uuid',
                    'header': 'text',
                    'rating': 'int',
                    'desc': 'text'
                },
                'primary_key': PKEY
            }]
        }
    VALIDATE_BEFORE_WRITE = True

class ReviewTest(unittest.TestCase):

    def setUp(self):
        if os.path.exists(DB_NAME):
            os.remove(DB_NAME)

    def test_table_not_set(self):
        """
        Tests for raise of assertion when table is not set.
        """
        db = Reviews()
        try:
            db.read()
        except Exception as err:
            self.assertIn('No table set for operation', str(err))

    def test_primary_key(self):
        """
        Ensures primary key is honored.
        """
        db = Reviews()
        db.table = 'reviews'
        db.write(rid=10, rating=5)
        try:
            db.write(rid=10, rating=4)
        except sqlite3.IntegrityError as err:
            self.assertIn('UNIQUE constraint failed', str(err))

    def _test_CREATE(self):
        db = Reviews()
        db.table = 'reviews'
        # backward compatibility for 'write' API
        db.write(tbl='reviews', rid=1, header='good thing', rating=5)

        # new API with 'create'
        db.create(tbl='reviews', rid=2, header='good thing', rating=5)

        # backward compatibility for 'write' API, without tbl
        # explicitly passed
        db.write(rid=3, header='good thing', rating=5)

        # new API with 'create', without table name explicitly passed
        db.create(rid=4, header='good thing', rating=5)

        db.commit()  # save the work

    def _test_READ(self):
        db = Reviews()
        db.table = 'reviews'

        rec1 = db.read(tbl='reviews', rid=1, header='good thing', rating=5)
        rec2 = db.read(rid=1, header='good thing')
        rec3 = db.read(rid=1)

        self.assertEqual(rec1, rec2)
        self.assertEqual(rec2, rec3)

        recs = db.read()  # read all
        self.assertEqual(len(recs), 4)

    def _test_UPDATE(self):
        db = Reviews()
        db.table = 'reviews'
        where = {'rid': 2}
        db.update(condition=where, header='average item', rating=2)
        db.commit()

        rec = db.read(rid=2)[0]
        self.assertIn('average item', rec)

    def _test_DELETE(self):
        db = Reviews()
        db.table = 'reviews'
        db.delete(rid=2)
        db.commit()
        self.assertFalse(db.read(rid=2))

    def test_CRUD(self):
        self._test_CREATE()
        self._test_READ()
        self._test_UPDATE()
        self._test_DELETE()

    def tearDown(self):
        os.remove(DB_NAME)
You can start with a small helper that ranks test names:

test_order = ['base']

def index_of(item, lst):
    try:
        return lst.index(item)
    except ValueError:
        return len(lst) + 1
Second, define the ordering function:

def order_methods(x, y):
    # strip the 'test_' prefix before looking the name up in test_order
    x_rank = index_of(x[5:100], test_order)
    y_rank = index_of(y[5:100], test_order)
    return (x_rank > y_rank) - (x_rank < y_rank)
Third, set it in the class:

class ClassTests(unittest.TestCase):
    unittest.TestLoader.sortTestMethodsUsing = staticmethod(order_methods)
ncoghlan's answer was exactly what I was looking for when I came to this question. I ended up modifying it to allow each step-test to run, even if a previous step had already thrown an error; this helps me (and maybe you!) to discover and plan for the propagation of error in multi-threaded database-centric software.
class Monolithic(TestCase):
    def step1_testName1(self):
        ...

    def step2_testName2(self):
        ...

    def steps(self):
        '''
        Generates the step methods from their parent object
        '''
        for name in sorted(dir(self)):
            if name.startswith('step'):
                yield name, getattr(self, name)

    def test_steps(self):
        '''
        Run the individual steps associated with this test
        '''
        # Create a flag that determines whether to raise an error at
        # the end of the test
        failed = False

        # An empty string that will accumulate error messages for
        # each failing step
        fail_message = ''

        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                # A step has failed; the test should continue through
                # the remaining steps, but eventually fail
                failed = True

                # Get the name of the method -- so the fail message is
                # nicer to read :)
                name = name.split('_')[1]
                # Append this step's exception to the fail message
                fail_message += "\n\nFAIL: {}\n    {} failed ({}: {})".format(name, step, type(e), e)

        # Check if any of the steps failed
        if failed is True:
            # Fail the test with the accumulated exception message
            self.fail(fail_message)
I also wanted to specify a particular order of execution for my tests. The main differences from the other answers here are:
I wanted to preserve a more verbose test method name without replacing the whole name with step1, step2, etc.
I also wanted the printed method execution in the console to have some granularity, as opposed to the monolithic solution in some of the other answers.
So where the execution of a monolithic test method looks like this:

test_booking (__main__.TestBooking) ... ok

I wanted:

test_create_booking__step1 (__main__.TestBooking) ... ok
test_process_booking__step2 (__main__.TestBooking) ... ok
test_delete_booking__step3 (__main__.TestBooking) ... ok
How to achieve this:
I gave my method names a __step<order> suffix, for example (the order of definition is not important):

def test_create_booking__step1(self):
    [...]

def test_delete_booking__step3(self):
    [...]

def test_process_booking__step2(self):
    [...]
For the test suite, override the __iter__ function, which builds the iterator for the test methods:
class BookingTestSuite(unittest.TestSuite):
    """ Extends the functionality of the standard test suites """

    def __iter__(self):
        for suite in self._tests:
            suite._tests = sorted(
                [x for x in suite._tests if hasattr(x, '_testMethodName')],
                key=lambda x: int(x._testMethodName.split("step")[1])
            )
        return iter(self._tests)
This will sort test methods into order and execute them accordingly.
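For completeness, one way to wire the suite up (a sketch; it assumes TestBooking is the TestCase holding the step-suffixed methods above, and nests it as a sub-suite because the __iter__ override expects each element to have its own _tests list):

if __name__ == '__main__':
    loader = unittest.TestLoader()
    suite = BookingTestSuite()
    # add the whole TestCase as a nested sub-suite, as __iter__ expects
    suite.addTest(loader.loadTestsFromTestCase(TestBooking))
    unittest.TextTestRunner(verbosity=2).run(suite)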