Execution order of Python unittests by their declaration

I'm using Python's unittest framework and Selenium, and in my code I have one test class with many test cases:
class BasicRegression(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Chrome(executable_path=Data.driver)
        cls.driver.implicitly_wait(1)
        cls.driver.maximize_window()

    def testcase1_some_stuff(self):
        do_something()

    def testcase2_some_stuff(self):
        do_something()

    def testcase3_some_stuff(self):
        do_something()

    ...

    @classmethod
    def tearDownClass(cls):
        cls.driver.close()
        cls.driver.quit()

if __name__ == '__main__':
    unittest.main()
The tests are executed alphabetically, i.e. testcase1, testcase2 and testcase3, up to testcase9, as expected. The problem appears with testcase10 and onwards, which is executed first.
My question is how can I set their order of execution?

To start with, unit tests are supposed to be independent, and that applies to python-unittest as well: tests executed through unittest should be designed so that they can run in isolation. Pure unit tests offer the benefit that when they fail, they usually pinpoint exactly what went wrong. Still, we tend to write functional tests, integration tests, and system tests with the unittest framework, and such tests are often not feasible to run without ordering them, since Selenium automates the browsing context. To achieve an ordering, you at least need a suitable naming convention for the test names, for example test_1, test_2, test_3, and so on. This works because the tests are sorted with respect to the built-in ordering for strings.
However, as you observed, the problem appears with test_10 and beyond, where the sorting order seems to break. As an example, among three tests named test_1, test_2 and test_10, unittest executes test_10 before test_2:
Code:
import unittest

class Test(unittest.TestCase):

    def setUp(self):
        print("I'm in setUp")

    def test_1(self):
        print("I'm in test 1")

    def test_2(self):
        print("I'm in test 2")

    def test_10(self):
        print("I'm in test 10")

    def tearDown(self):
        print("I'm in tearDown")

if __name__ == "__main__":
    unittest.main()
Console Output:
Finding files... done.
Importing test modules ... done.
I'm in setUp
I'm in test 1
I'm in tearDown
I'm in setUp
I'm in test 10
I'm in tearDown
I'm in setUp
I'm in test 2
I'm in tearDown
----------------------------------------------------------------------
Ran 3 tests in 0.001s
OK
Solution
Different solutions have been offered in different discussions; some of them are as follows:
@max in the discussion Unittest tests order suggested setting sortTestMethodsUsing to None as follows:
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
@atomocopter in the discussion changing order of unit tests in Python suggested setting sortTestMethodsUsing to a custom comparison as follows:
import unittest
unittest.TestLoader.sortTestMethodsUsing = lambda _, x, y: cmp(y, x)  # note: cmp() exists only in Python 2
@ElmarZander in the discussion Unittest tests order suggested using nose and writing your test cases as functions (and not as methods of some TestCase-derived class); nose doesn't fiddle with the order, but uses the order of the functions as defined in the file.
@Keiji in the discussion Controlling the order of unittest.TestCases mentions:
sortTestMethodsUsing expects a function like Python 2's cmp, which has no equivalent in Python 3 (I went to check whether Python 3 had a <=> spaceship operator yet, but apparently not; you are expected to rely on separate comparisons for < and ==, which seems like a step backwards...). The function takes the two arguments to compare and must return a negative number if the first is smaller. Notably, in this particular case the function may assume that the arguments are never equal, as unittest will not put duplicates in its list of test names.
With this in mind, here's the simplest way I found to do it, assuming
you only use one TestCase class:
def make_orderer():
    order = {}

    def ordered(f):
        order[f.__name__] = len(order)
        return f

    def compare(a, b):
        return [1, -1][order[a] < order[b]]

    return ordered, compare

ordered, compare = make_orderer()
unittest.defaultTestLoader.sortTestMethodsUsing = compare
Then, annotate each test method with @ordered:
class TestMyClass(unittest.TestCase):
    @ordered
    def test_run_me_first(self):
        pass

    @ordered
    def test_do_this_second(self):
        pass

    @ordered
    def test_the_final_bits(self):
        pass

if __name__ == '__main__':
    unittest.main()
This relies on Python calling annotations in the order the annotated
functions appear in the file. As far as I know, this is intended, and
I'd be surprised if it changed, but I don't actually know if it's
guaranteed behavior. I think this solution will even work in Python 2
as well, for those who are unfortunately stuck with it, though I
haven't had a chance to test this.
If you have multiple TestCase classes, you'll need to run ordered,
compare = make_orderer() once per class before the class
definition, though how this can be used with sortTestMethodsUsing
will be more tricky and I haven't yet been able to test this either.
For the record, the code I am testing does not rely on the test
order being fixed - and I fully understand that you shouldn't rely on
test order, and this is the reason people use to avoid answering this
question. The order of my tests could be randomised and it'd work just
as well. However, there is one very good reason I'd like the order to
be fixed to the order they're defined in the file: it makes it so much
easier to see at a glance which tests failed.

Related

QA - testing order is not right [duplicate]

How can I be sure of the unittest methods' order? Are alphabetical or numeric prefixes the proper way?
class TestFoo(TestCase):
    def test_1(self):
        ...
    def test_2(self):
        ...
or
class TestFoo(TestCase):
    def test_a(self):
        ...
    def test_b(self):
        ...
You can disable the sorting by setting sortTestMethodsUsing to None:
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
For pure unit tests, you folks are right; but for component tests and integration tests...
I do not agree that you shall assume nothing about the state.
What if you are testing the state?
For example, your test validates that a service is auto-started upon installation. If, in your setup, you start the service and then do the assertion, you are no longer testing the state, but the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unit test" frameworks for component testing...so stop and ask yourself, am I doing unit testing or component testing?
There is no reason given that you can't build on what was done in a previous test or should rebuild it all from scratch for the next test. At least no reason is usually offered but instead people just confidently say "you shouldn't". That isn't helpful.
In general I am tired of reading too many answers here that say basically "you shouldn't do that" instead of giving any information on how to best do it if in the questioners judgment there is good reason to do so. If I wanted someone's opinion on whether I should do something then I would have asked for opinions on whether doing it is a good idea.
That out of the way: if you read, say, loadTestsFromTestCase and what it calls, it ultimately scans for methods matching a name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and maps it onto the TestCase class to build a test suite. Giving it a list ordered the way you want instead is one way to do this. I am not sure it is the most efficient or cleanest way, but it does work.
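For illustration, here is a minimal sketch of that idea (the class and method names are made up); instead of letting the loader scan and sort the names, you hand unittest.TestSuite an explicitly ordered list:
import unittest

class MyTests(unittest.TestCase):
    def test_first(self):
        pass
    def test_second(self):
        pass

def ordered_suite():
    # Build the suite from a hand-ordered list of method names
    # instead of the loader's sorted scan.
    names = ["test_first", "test_second"]
    return unittest.TestSuite(MyTests(name) for name in names)

if __name__ == "__main__":
    unittest.TextTestRunner().run(ordered_suite())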
If you use 'nose' and you write your test cases as functions (and not as methods of some TestCase derived class), 'nose' doesn't fiddle with the order, but uses the order of the functions as defined in the file.
In order to have the assert_* methods handy without needing to subclass TestCase I usually use the testing module from NumPy. Example:
from numpy.testing import *

def test_aaa():
    assert_equal(1, 1)

def test_zzz():
    assert_equal(1, 1)

def test_bbb():
    assert_equal(1, 1)
Running that with nosetests -vv gives:
test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and can run independently, your functions and classes are usually not independent.
Rather, they build on one another, from simpler/low-level functions to more complex/high-level functions. When you start optimising your low-level functions and mess up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-) ), then it's a lot better for diagnosing the cause when the tests for simple functions come first and the tests for functions that depend on them come later.
If the tests are sorted alphabetically, the real cause usually gets drowned among a hundred failed assertions, which are there not because the function under test has a bug, but because the low-level function it relies on does.
That's why I want to have my unit tests sorted the way I specified them: not to use state that was built up in early tests in later tests, but as a very helpful tool in diagnosing problems.
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in UnitTest.
That said, one alternative is to use mock objects to mock out and patch the items that should run before that specific code under test. You can also put a dummy function in there to monkey patch your code. For more information, check out Mock, which is part of the standard library now.
Here are some YouTube videos if you haven't used Mock before.
Video 1
Video 2
Video 3
More to the point, try using class methods to structure your code, and then place all the class methods in one main test method.
import unittest
import sqlite3

class MyOrderedTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * FROM players")
        draft = [(1, u'Hakeem', u'Olajuwon'), (2, u'Sam', u'Bowie'), (3, u'Michael', u'Jordan')]
        query = cur.fetchall()
        print(query)
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print(third)
        self.assertEqual(third, "Michael Jordan")
Why do you need a specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the database setup code or create testing data ad nauseam.)
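A minimal sketch of that pattern (the table and column names are invented), using an in-memory SQLite database so every test starts from the same known state:
import sqlite3
import unittest

class UnsubscribeTest(unittest.TestCase):
    def setUp(self):
        # A fresh database with one test subscription for every test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE subscriptions (user TEXT PRIMARY KEY)")
        self.conn.execute("INSERT INTO subscriptions (user) VALUES ('alice')")

    def tearDown(self):
        self.conn.close()

    def test_unsubscribe_removes_subscription(self):
        self.conn.execute("DELETE FROM subscriptions WHERE user = 'alice'")
        count = self.conn.execute("SELECT COUNT(*) FROM subscriptions").fetchone()[0]
        self.assertEqual(count, 0)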
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from the tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Don't rely on the order. If they use some common state, like the filesystem or database, then you should create setUp and tearDown methods that get your environment into a testable state, and then clean up after the tests have run.
Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
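As a rough sketch of that advice (the file names are invented), setUp builds the environment and tearDown cleans it up, so each test assumes only what setUp provides:
import shutil
import tempfile
import unittest
from pathlib import Path

class FileStateTest(unittest.TestCase):
    def setUp(self):
        # Create an isolated working directory with a known input file.
        self.workdir = Path(tempfile.mkdtemp())
        (self.workdir / "input.txt").write_text("hello")

    def tearDown(self):
        # Remove everything the test may have created.
        shutil.rmtree(self.workdir)

    def test_input_file_is_present(self):
        self.assertTrue((self.workdir / "input.txt").is_file())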
You should try the proboscis library. It allows you to order tests as well as set up test dependencies. I use it, and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B you CAN set this behaviour using the library.
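A rough sketch of what that can look like; the decorator-based API below is written from memory of the proboscis documentation, so treat the exact names as an assumption and check the project's docs before relying on it:
from proboscis import test, TestProgram

@test(groups=["accounts"])
def create_account():
    pass  # placeholder

@test(depends_on=[create_account])
def delete_account():
    pass  # runs only after create_account has passed

if __name__ == "__main__":
    TestProgram().run_and_exit()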
Here is a simpler method that has the following advantages:
No need to create a custom TestCase class.
No need to decorate every test method.
Uses the standard unittest load_tests protocol (see the Python docs).
The idea is to go through all the test cases of the test suites given to the test loader protocol and create a new suite but with the tests ordered by their line number.
Here is the code:
import unittest

def load_ordered_tests(loader, standard_tests, pattern):
    """
    Test loader that keeps the tests in the order they were declared in the class.
    """
    ordered_cases = []
    for test_suite in standard_tests:
        ordered = []
        for test_case in test_suite:
            test_case_type = type(test_case)
            method_name = test_case._testMethodName
            testMethod = getattr(test_case, method_name)
            line = testMethod.__code__.co_firstlineno
            ordered.append((line, test_case_type, method_name))
        ordered.sort()
        for line, case_type, name in ordered:
            ordered_cases.append(case_type(name))
    return unittest.TestSuite(ordered_cases)
You can put this in a module named order_tests and then in each unittest Python file, declare the test loader like this:
from order_tests import load_ordered_tests
# This orders the tests to be run in the order they were declared.
# It uses the unittest load_tests protocol.
load_tests = load_ordered_tests
Note: the often suggested technique of setting the test sorter to None no longer works because Python now sorts the output of dir() and unittest uses dir() to find tests. So even though you have no sorting method, they still get sorted by Python itself!
From unittest — Unit testing framework
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
Check out this Stack Overflow question for details.
There are scenarios where the order can be important and where setUp and tearDown are too limited. There is only one setUp and one tearDown method, which is logical, but you can only put so much information in them before it becomes unclear what setUp or tearDown is actually doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't log in without an existing account.
More importantly the order of your tests represents some kind of user interaction. Where each test might represent a step in the whole process or flow you're testing.
Dividing your code in those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:
def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:
def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza":"pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza":"pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
However, in such cases, you might want to consider structuring your tests differently so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.
def make_myobject(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj

def test_00_createEmptyObject(self):
    obj = self.make_myobject(None, None)

def test_01_createObject(self):
    obj = self.make_myobject("hello", {"pizza":"pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_myobject("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE." If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop will write a log entry to the effect that it is sleeping 1 second. Once the "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setup()) and decrements the same variable in teardown().
The effect of this is to allow the various data consumers to block the purge functionality for as long as they need, without fear that the purge could execute in between tests. Effectively, the purge operation is pushed to the last step... or at least to the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge to test data restoration.
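A rough sketch of the mechanism described above (the variable name comes from the answer, the test bodies are placeholders, and it assumes the consumer tests and the purge test actually share the variable, e.g. by running in workers that inherit it):
import os
import time
import unittest

def blocked_count():
    # Environment variables hold strings, so the counter is stored as text.
    return int(os.environ.get("BLOCK_DATA_PURGE", "0"))

class DataConsumerTest(unittest.TestCase):
    def setUp(self):
        os.environ["BLOCK_DATA_PURGE"] = str(blocked_count() + 1)

    def tearDown(self):
        os.environ["BLOCK_DATA_PURGE"] = str(blocked_count() - 1)

    def test_uses_shared_data(self):
        pass  # works against the shared data set

class DataPurgeTest(unittest.TestCase):
    def test_purge(self):
        while blocked_count() > 0:
            print("purge blocked, sleeping 1 second")
            time.sleep(1)
        # ...run the actual purge and its assertions here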
See the example of WidgetTestCase on Organizing test code. It says that
Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance.
So specifying the order of test cases may be of no use if you do not access global variables.
I have implemented a plugin, nosedep, for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments, this is often a bad idea, however there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs. hours).
A minimal example is:
from nosedep import depends

def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass
To ensure that test_b is always run before test_a.
The philosophy behind unit tests is to make them independent of each other. This means the first step should always be to rethink how you are testing each piece so that it matches that philosophy. This can involve changing how you approach testing and being creative about narrowing your tests to smaller scopes.
However, if you still find that you need tests run in a specific order (as that is viable), you could check out the answer to Python unittest.TestCase execution order.
It seems they are executed in alphabetical order by test name (using the comparison function between strings).
Since tests in a module are also only executed if they begin with "test", I put in a number to order the tests:
class LoginTest(unittest.TestCase):
    def setUp(self):
        driver.get("http://localhost:2200")

    def tearDown(self):
        # self.driver.close()
        pass

    def test1_check_at_right_page(self):
        ...
        assert "Valor" in driver.page_source

    def test2_login_a_manager(self):
        ...
        submit_button.click()
        assert "Home" in driver.title

    def test3_is_manager(self):
        ...
Note that numbers do not necessarily sort the way you expect when compared as strings: "9" > "10" is True in the Python shell, for instance. Consider using zero-padded decimal strings (which avoids this problem), such as "000", "001", ... "010"... "099", "100", ... "999".
Contrary to what was said here:
tests have to run in isolation (order must not matter for that)
and
ordering them is important because they describe what the system does and how the developer implements it.
In other words, each test brings you information of the system and the developer logic.
So if this information is not ordered it can make your code difficult to understand.
To randomise the order of test methods you can monkey patch the unittest.TestLoader.sortTestMethodsUsing attribute
if __name__ == '__main__':
    import random
    unittest.TestLoader.sortTestMethodsUsing = lambda self, a, b: random.choice([1, 0, -1])
    unittest.main()
The same approach can be used to enforce whatever order you need.
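For example, here is a minimal sketch that pins a fixed order (the method names in DESIRED_ORDER are placeholders); sortTestMethodsUsing is a cmp-style function, so it returns a negative, zero, or positive value:
import unittest

DESIRED_ORDER = ["test_setup_data", "test_process_data", "test_cleanup"]

def _rank(name):
    # Unlisted tests sort after the listed ones and keep their existing order.
    return DESIRED_ORDER.index(name) if name in DESIRED_ORDER else len(DESIRED_ORDER)

unittest.TestLoader.sortTestMethodsUsing = (
    lambda self, a, b: (_rank(a) > _rank(b)) - (_rank(a) < _rank(b))
)

if __name__ == '__main__':
    unittest.main()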

How to assert a method has been called from another complex method in Python?

I am adding some tests to existing, not so test-friendly code. As the title suggests, I need to test whether the complex method actually calls another method, e.g.
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some code here I am not interested in for this test case
        ...
        if some condition:
            self.verify_permission(...)
        # some other code here I am not interested in for this test case
        ...
I need to write some test cases to verify self.verify_permission is called when condition is met.
Do I need to mock all the way to the point where self.verify_permission is executed? Or do I need to refactor the get() function to abstract out the code and make it more test-friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some code here I am not interested in for this test case
        ...
        some_condition = True if arg % 2 == 0 else False
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other code here I am not interested in for this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code

class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()

        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()

        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. function) under testing from all your other functions. That means, if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically, you 1.) always mock your own functions and 2.) usually mock external/built-in functions with side effects (like e.g. network or disk I/O). That is it.
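A minimal sketch of point 2, assuming a hypothetical module my_client whose fetch_status() calls requests.get internally (both names are made up for illustration):
from unittest import TestCase
from unittest.mock import patch

import my_client  # hypothetical module under test

class FetchStatusTestCase(TestCase):
    @patch("my_client.requests.get")  # replace the real network call for this test
    def test_fetch_status_calls_the_api(self, mock_get):
        mock_get.return_value.status_code = 200
        my_client.fetch_status()
        mock_get.assert_called_once()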
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.

Can I run unittest / pytest with python Optimization on?

I just added a few assert statements to the constructor of a class.
This has had the immediate effect of making about 10 tests fail.
Rather than fiddle with those tests I'd just like pytest to run the application code (not the test code obviously) with Python's Optimization switched on (-O switch, which means the asserts are all ignored). But looking at the docs and searching I can't find a way to do this.
I'm slightly wondering whether this might be bad practice, as arguably the time to see whether asserts fail may be during testing.
On the other hand, another thought is that you might have certain tests (integration tests, etc.) which should be run without optimisation, so that the asserts take effect, and other tests where you are being less scrupulous about the objects you are creating, where it might be justifiable to ignore the asserts.
asserts obviously qualify as "part of testing"... I'd like to add more to some of my constructors and other methods, typically to check parameters, but without making hundreds of tests fail, or have to become much more complicated.
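A note on the mechanics, hedged because behaviour around pytest's own assertion rewriting varies by version and should be verified for your setup: pytest has no dedicated option for this, but you can switch optimization on for the interpreter that runs pytest, which strips plain assert statements from the application code (and possibly from test code as well):
python -O -m pytest
PYTHONOPTIMIZE=1 pytest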
The best way in this case would be to move all assert statements inside your test code. Maybe even switch to https://pytest.org/ as it is already using assert for test evaluation.
I'm assuming you can't in fact do this.
Florin and chepner have both made me wonder whether and to what extent this is desirable. But one can imagine various ways of simulating something like this, for example a Verifier class:
import inspect
import pathlib

class ProjectFile():
    def __init__(self, project, file_path, project_file_dict=None):
        self.file_path = file_path
        self.project_file_dict = project_file_dict
        if __debug__:
            Verifier.check(self, inspect.stack()[0][3])  # gives name of the method we're in

class Verifier():
    @staticmethod
    def check(object, method, *args, **kwargs):
        print(f'object {object} method {method}')
        if type(object) == ProjectFile:
            project_file = object
            if method == '__init__':
                # run some real-world checks, etc.:
                assert project_file.file_path.is_file()
                assert project_file.file_path.suffix.lower() == '.docx'
                assert isinstance(project_file.file_path, pathlib.Path)
                if project_file.project_file_dict != None:
                    assert isinstance(project_file.project_file_dict, dict)
Then you can patch out the Verifier.check method easily enough in the testing code:
def do_nothing(*args, **kwargs):
    pass

verifier_class.Verifier.check = do_nothing
... so you don't even have to clutter your methods up with another fixture or whatever. Obviously you can do this on a module-by-module basis so, as I said, some modules might choose not to do this (integration tests, etc.)

Unittest check if main called methods

I have a main method that looks like this:
class Example:
    ...
    def main(self):
        self.one()
        self.two(list)
        self.three(self.four(4))
How can I check that calling main calls the following methods inside it?
I have tried:
def setUp(self):
    self.ex = example.Example()

def test_two(self):
    # testing method two, which only does list.append(int) and returns the list
    mock_obj = Mock()
    self.ex.two(mock_obj, 1)
    self.assertEqual(call.append(1), mock_obj.method_calls[0])  # works fine
    mock_obj.method.called  # returns False ...why?

def test_main(self):
    with patch('example.Example') as a:
        a.main()
        print(a.mock_calls)  # [call.main()]
...

def test_main(self):
    mock_obj = Mock()
    self.ex.main(mock_obj)  # throws TypeError: main() takes exactly 1 argument (2 given)
    print(mock_obj.method_calls)  # expected one, two, three and four method calls
Really need some help, to be honest...
Using Python 2.6.6 with the unittest and mock modules.
With unit-testing you could in principle test if these four functions would actually be called. And, you could certainly do this by mocking all of them. However, you would need integration tests later anyway to be sure that the functions were called in the proper order, with the arguments in the proper order, arguments having values in the form expected by the callee, return values being in the expected form etc.
You can check all these things in unit-testing - but this has not much value, because if you have wrong assumptions about one of these points, your wrong assumptions will go into both your code and your unit-tests. That is, the unit-tests will test exactly against your wrong assumptions and will pass. Finding out about your wrong assumptions requires an integration test where the real caller and callee are brought together.
Summarized: Your main method is interaction dominated and thus should rather be tested directly by interaction-testing (aka integration-testing) rather than by unit-testing plus subsequent interaction-testing.

How do I disable a test using pytest?

Let's say I have a bunch of tests:
def test_func_one():
    ...

def test_func_two():
    ...

def test_func_three():
    ...
Is there a decorator or something similar that I could add to the functions to prevent pytest from running just that test? The result might look something like...
@pytest.disable()
def test_func_one():
    ...

def test_func_two():
    ...

def test_func_three():
    ...
Pytest has the skip and skipif decorators, similar to the Python unittest module (which uses skip and skipIf), which can be found in the documentation here.
Examples from the linked documentation:
@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

import sys

@pytest.mark.skipif(sys.version_info < (3, 3),
                    reason="requires python3.3")
def test_function():
    ...
The first example always skips the test; the second example allows you to conditionally skip tests (great when tests depend on the platform, executable version, or optional libraries). For example, you can check whether someone has the pandas library installed before running a test:
import sys

try:
    import pandas as pd
except ImportError:
    pass

@pytest.mark.skipif('pandas' not in sys.modules,
                    reason="requires the Pandas library")
def test_pandas_function():
    ...
The skip decorator would do the job:
@pytest.mark.skip(reason="no way of currently testing this")
def test_func_one():
    ...
(reason argument is optional, but it is always a good idea to specify why a test is skipped).
There is also skipif(), which allows you to disable a test if some specific condition is met.
These decorators can be applied to methods, functions or classes.
To skip all tests in a module, define a global pytestmark variable:
# test_module.py
pytestmark = pytest.mark.skipif(...)
You can mark a test with the skip and skipif decorators when you want to skip a test in pytest.
Skipping a test
@pytest.mark.skip(reason="no way of currently testing this")
def test_func_one():
    ...
The simplest way to skip a test is to mark it with the skip decorator which may be passed an optional reason.
It is also possible to skip imperatively during test execution or setup by calling the pytest.skip(reason) function. This is useful when it is not possible to evaluate the skip condition during import time.
def test_func_one():
    if not valid_config():
        pytest.skip("unsupported configuration")
Skipping a test based on a condition
@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
def test_func_one():
    ...
If you want to skip based on a condition, you can use skipif instead. In the previous example, the test function is skipped when run on an interpreter earlier than Python 3.6.
Finally, if you want to skip a test because you are sure it is failing, you might also consider using the xfail marker to indicate that you expect a test to fail.
I'm not sure if it's deprecated, but you can also use the pytest.skip function inside of a test:
def test_valid_counting_number():
    number = random.randint(1, 5)
    if number == 5:
        pytest.skip('Five is right out')
    assert number <= 3
If you want to skip a test without hard-coding a marker, it is better to use a keyword expression to exclude it:
pytest test/test_script.py -k 'not test_func_one'
Note: here 'keyword expression' basically means expressing something using keywords provided by pytest (or Python) and getting something done. In the above example, 'not' is a keyword.
For more info, refer to this link.
More examples of keyword expressions: refer to this answer.
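A couple more keyword-expression sketches (the test names are placeholders; -k accepts and, or and not):
pytest test/test_script.py -k 'not test_func_one and not test_func_two'
pytest test/test_script.py -k 'login or registration'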
You may also want to run a test even if you suspect that the test will fail. For such a scenario, https://docs.pytest.org/en/latest/skipping.html suggests using the decorator @pytest.mark.xfail:
@pytest.mark.xfail
def test_function():
    ...
In this case, Pytest will still run your test and let you know if it passes or not, but won't complain and break the build.
You can divide your tests into sets of test cases with custom pytest markers and execute only the test cases you want, or the inverse, running all tests except another set:
@pytest.mark.my_unit_test
def test_that_unit():
    ...

@pytest.mark.my_functional_test
def test_that_function():
    ...
And then, to run only one set of tests, for example:
pytest -m my_unit_test
Inversely, if you want to run all tests except one set:
pytest -m "not my_unit_test"
How to combine several marks
More examples are in the official documentation.
It looks more convenient if you have good logic separation of test cases.
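One practical note, sketched with the marker names from the example above: recent pytest versions warn about unknown marks, so custom markers are normally registered in pytest.ini (or an equivalent config file), and -m expressions can combine them:
[pytest]
markers =
    my_unit_test: fast, isolated unit tests
    my_functional_test: slower functional tests
and then, for example:
pytest -m "my_unit_test and not my_functional_test"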
