Why not use Python's assert statement in tests these days?

In Python testing, why would you use assert methods:
self.assertEqual(response.status_code, 200)
self.assertIn('key', my_dict)
self.assertIsNotNone(thing)
As opposed to the direct assertions:
assert response.status_code == 200
assert 'key' in my_dict
assert thing is not None
According to the docs:
These methods are used instead of the assert statement so the test runner can accumulate all test results and produce a report
However, this seems to be bogus: a test runner can accumulate results and produce a report regardless. In a related post, unutbu showed that unittest raises an AssertionError just the same as the assert statement does, and that was over 7 years ago, so it's not a shiny new feature either.
With a modern test runner such as pytest, the failure messages generated by the assertion helper methods aren't any more readable (arguably the camelCase style of unittest is less readable). So, why not just use assert statements in your tests? What are the perceived disadvantages, and why haven't important projects such as CPython moved away from unittest yet?

The key difference between using the assert keyword and the dedicated methods is the output report. A bare assert only evaluates its expression to True or False; on its own it carries no extra information about the values involved.
assert 3 == 4
will simply show an AssertionError in the report.
However,
self.assertTrue(3 == 4)
gives some extra info: AssertionError: False is not true. Not very helpful, but consider:
self.assertEqual(3, 4)
This is much better, as it tells you AssertionError: 3 != 4. You read the report and you know what kind of assertion it was (an equality test) and which values were involved.
Suppose you have some function and want to make an assertion about the value it returns.
You can do it in two ways:
# assert statement
assert your_function_to_test() == expected_result
# unittest style
self.assertEqual(your_function_to_test(), expected_result)
In case of failure, the first one gives you no information besides the assertion error; the second one tells you the type of assertion (an equality test) and which values were involved (the returned value and the expected one).
For small projects I never bother with unittest style as it's longer to type, but in big projects you may want to know more about the error.
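As a side note, both styles let you attach an explicit message when the defaults aren't enough. A minimal sketch, reusing your_function_to_test and expected_result from above:
# assert statement: the expression after the comma is evaluated only on failure
result = your_function_to_test()
assert result == expected_result, f"got {result!r}, expected {expected_result!r}"
# unittest style: the optional msg argument is appended to the default "a != b" output
self.assertEqual(your_function_to_test(), expected_result, msg="wrong return value")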

I'm not entirely sure I understand the question. The title is "Why not use Python's assert statement in tests these days".
As you've noted, you can in fact use plain assertions if you use a test framework like pytest. However, pytest does something quite special to make this work: it rewrites the plain assertions in the test code before it runs the tests.
See https://docs.pytest.org/en/stable/writing_plugins.html#assertion-rewriting which states:
One of the main features of pytest is the use of plain assert statements and the detailed introspection of expressions upon assertion failures. This is provided by “assertion rewriting” which modifies the parsed AST before it gets compiled to bytecode.
The unittest framework does not implement this extra complexity. (And it is extra complexity: pytest rewrites only the assertions in your test modules; it will not rewrite the assertions in the other Python libraries your test code uses. So you will sometimes find pytest hits an assertion error in code your test calls, but there is no detail about why the assertion failed, because that bit of code was not rewritten, and you only get a plain AssertionError.)
Instead, unittest provides methods like assertEqual so that it can:
1. know it's a test assertion that has failed, rather than some other unhandled/unexpected exception; and
2. provide information as to why the assertion was not satisfied. (A plain assertion in Python does nothing but raise AssertionError; it does not say, for example, AssertionError because 1 != 2.)
Pytest achieves both 1 and 2 by rewriting the abstract syntax tree before it runs the test code. Unittest takes the more traditional approach of asking the developer to use particular methods.
So essentially the answer is: it's an implementation difference between the test frameworks. Put another way, Python's built-in assert statement provides no debugging information about why the failure occurred, so if you want more information, you need to decide how you're going to provide it.
Unittest is a lot simpler than pytest. Pytest is great but it is also a lot more complicated.
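To illustrate the limitation mentioned above, here is a minimal sketch (the module and function names are made up): an assert living in an ordinary helper module is not rewritten, so it fails with a bare AssertionError, while the same assert written directly in a test module gets the detailed introspection.
# helpers.py -- an ordinary module, so pytest does not rewrite its assertions
def check_status(code):
    assert code == 200  # fails with a bare AssertionError, no detail

# test_status.py -- pytest rewrites assertions in test modules like this one
from helpers import check_status

def test_in_helper():
    check_status(404)  # report shows only: AssertionError

def test_in_test_module():
    assert 404 == 200  # report shows: assert 404 == 200
If you do want rewriting inside a helper package, pytest exposes pytest.register_assert_rewrite() to opt specific modules in, typically called from a conftest.py before the helpers are imported.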

I think with the current pytest versions you can use assert in most cases, as the context is reconstructed by the test framework (pytest 2.9.2):
def test_requests(self):
...
> self.assertEqual(cm.exception.response.status_code, 200)
E AssertionError: 404 != 200
looks similar to
def test_requests(self):
...
> assert cm.exception.response.status_code == 200
E AssertionError: assert 404 == 200
E -404
E +200
In theory, using self.assertXxx() methods would allow pytest to count the number of assertions that did not fail, but AFAIK there is no such metric.

The link to the docs that you found is the correct answer. If you do not like this style of writing tests, I would highly suggest using pytest:
http://pytest.org/latest/
pytest has done a bunch of work that allows you to use the assert statement the way you want to. It also has a bunch of other really nice features, such as its fixtures.

Related

How to assert a method has been called from another complex method in Python?

I am adding some tests to existing, not-so-test-friendly code. As the title suggests, I need to test whether a complex method actually calls another method, e.g.
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some code here I am not interested in for this test case
        ...
        if some_condition:
            self.verify_permission(...)
        # some other code here I am not interested in for this test case
        ...
I need to write some test cases to verify that self.verify_permission is called when the condition is met.
Do I need to mock all the way down to the point where self.verify_permission is executed? Or do I need to refactor the get() method to abstract out the code and make it more test-friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some code here I am not interested in for this test case
        ...
        some_condition = True if arg % 2 == 0 else False
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other code here I am not interested in for this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code


class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()

        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()

        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. function) under testing from all your other functions. That means, if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically, you 1.) always mock your own functions and 2.) usually mock external/built-in functions with side effects (like e.g. network or disk I/O). That is it.
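For point 2, a minimal sketch of mocking out an external call with side effects; the requests usage and the fetch_status helper are purely illustrative:
# app.py (illustrative)
import requests

def fetch_status(url):
    return requests.get(url).status_code

# test_app.py
from unittest import TestCase
from unittest.mock import patch

import app


class FetchStatusTestCase(TestCase):
    @patch("app.requests.get")
    def test_fetch_status(self, mock_get):
        # the test never touches the network; the mock dictates the response
        mock_get.return_value.status_code = 200
        self.assertEqual(app.fetch_status("https://example.com"), 200)
        mock_get.assert_called_once_with("https://example.com")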
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.

Can I run unittest / pytest with python Optimization on?

I just added a few assert statements to the constructor of a class.
This has had the immediate effect of making about 10 tests fail.
Rather than fiddle with those tests, I'd just like pytest to run the application code (not the test code, obviously) with Python's optimization switched on (the -O switch, which means the asserts are all ignored). But looking at the docs and searching, I can't find a way to do this.
I'm slightly wondering whether this might be bad practice, as arguably the time to see whether asserts fail may be during testing.
On the other hand, another thought is that you might have certain tests (integration tests, etc.) which should be run without optimisation, so that the asserts take effect, and other tests where you are being less scrupulous about the objects you are creating, where it might be justifiable to ignore the asserts.
asserts obviously qualify as "part of testing"... I'd like to add more to some of my constructors and other methods, typically to check parameters, but without making hundreds of tests fail or having them become much more complicated.
The best way in this case would be to move all assert statements inside your test code. Maybe even switch to https://pytest.org/, as it already uses assert for test evaluation.
I'm assuming you can't in fact do this.
Florin and chepner have both made me wonder whether and to what extent this is desirable. But one can imagine various ways of simulating something like this, for example a Verifier class:
import inspect
import pathlib


class ProjectFile():
    def __init__(self, project, file_path, project_file_dict=None):
        self.file_path = file_path
        self.project_file_dict = project_file_dict
        if __debug__:
            Verifier.check(self, inspect.stack()[0][3])  # gives name of method we're in


class Verifier():
    @staticmethod
    def check(object, method, *args, **kwargs):
        print(f'object {object} method {method}')
        if type(object) == ProjectFile:
            project_file = object
            if method == '__init__':
                # run some real-world checks, etc.:
                assert isinstance(project_file.file_path, pathlib.Path)
                assert project_file.file_path.is_file()
                assert project_file.file_path.suffix.lower() == '.docx'
                if project_file.project_file_dict is not None:
                    assert isinstance(project_file.project_file_dict, dict)
Then you can patch out the Verifier.check method easily enough in the testing code:
def do_nothing(*args, **kwargs):
    pass


verifier_class.Verifier.check = do_nothing
... so you don't even have to clutter your methods up with another fixture or whatever. Obviously you can do this on a module-by-module basis, so, as I said, some modules might choose not to do it (integration tests, etc.).
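An alternative sketch using the standard library's unittest.mock rather than assigning the attribute directly, so the original check method is restored automatically when the block exits (verifier_class is assumed to be the module the Verifier lives in):
from unittest.mock import patch

import verifier_class


def test_project_file_without_verification():
    with patch.object(verifier_class.Verifier, "check"):
        # Verifier.check is a MagicMock inside this block,
        # so the real-world assertions are skipped for this test only
        project_file = verifier_class.ProjectFile(None, "fake.docx")
    # outside the block the original check is back in place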

Pytest individual test decorator to continue asserting even after a failure?

To stop the testing process after the first (N) failures:
pytest --maxfail=2
What I want is:
@pytest.some_decorator
def test_one(data):
    ...  # do something
    assert feature1...
    assert feature2...
    assert feature3...
This @pytest.some_decorator should run all the assert statements even after the first one fails. It cannot be parametrized, as it's a feature of the data.
Is this possible? It really doesn't make sense to me to write multiple tests for cases like this.
If you use a normal assert, then you're basically wanting to go back inside a function after the function has exited.
You can do that with fuckitpy. See https://github.com/ajalt/fuckitpy#as-a-decorator Maybe there's a way to modify it so it only continues after AssertionErrors and not just any exception.
Split the tests so there is a single assert per test. Use the @pytest.mark.xfail() decorator to mark tests you expect to fail.
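Another common pattern, sketched here with made-up feature1/feature2/feature3 checks, is to collect failures in a list and make a single assertion at the end, so every check runs before the test is reported as failed:
def test_features(data):
    errors = []
    if not feature1(data):
        errors.append("feature1 failed")
    if not feature2(data):
        errors.append("feature2 failed")
    if not feature3(data):
        errors.append("feature3 failed")
    # one assert at the end reports every collected failure at once
    assert not errors, "\n".join(errors)
There is also a third-party plugin, pytest-check, built around this soft-assertion idea.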

What is the correct order for actual and expected in pytest?

This question gives the order assertEqual(expected, actual), albeit for the unittest package.
But PyCharm, with pytest, prints out "Expected:..." and "Actual:..." based on the order actual == expected.
This is confusing. What is the correct ordering for pytest? The source code and online documentation do not say.
(I note also that JUnit and TestNG disagree on this.)
JUnit
assertEquals(expected, actual)
Pytest
assert actual == expected
For example:
def test_actual_expected():
    expected = 4
    actual = 2 + 1
    assert actual == expected
will fail with a message along the lines of assert 3 == 4.
Others' comments have implied this may be more an issue with the way PyCharm displays the message than with pytest itself, i.e. this labelling may not exist outside of PyCharm... I don't know.
BDFL doesn't like actual/expected terminology and the docs were specifically changed to address this.
If your tooling is expecting arguments in a certain order, then I suppose the most correct thing to do would be to consistently do what works for your tooling.

Writing Python tests like QUnit.js

I'm trying to find an approach similar to QUnit's assertions in Python. When using assertions in QUnit, the message parameter is used in a very descriptive fashion.
test( "test", function() {
ok( fn([])==None, "Function should return 0 if no users" );
ok( fn(["Test User"])==1, "Function should return 1 is users supplied" );
});
Python's unittest module, on the other hand, uses the message parameter in a somewhat more negative context: it is only shown when an assertion fails.
class TestSequenceFunctions(unittest.TestCase):
    def test_choice(self):
        seq = range(10)
        element = random.choice(seq)
        self.assertTrue(element in seq, msg="Element not found in sequence")
The end result with QUnit is a much clearer transcript, which could be compared against a spec document.
I realise that in Python a similar approach could be achieved by, say, writing
def test_choice_ensure_element_exists_in_sequence(self):
It's not the same, though. The output isn't presented in a nice way, and the test lifecycle then performs setup and teardown for each label you want to use, which isn't necessarily what you want.
There might be a library out there which takes this approach, so perhaps this issue is already solved. Neither the Python unittest library nor pytest appears to work in this fashion, though.
Your problem could simply be that you don't know the unittest library well enough yet. I find being able to write
self.assertIn('s', (1,3,4))
to be very short, expressive and readable.
And if you use the correct assertion method on the testcase then you rarely need to add your own message. assertIn has a perfectly reasonable output all by itself:
AssertionError: 's' not found in (1, 3, 4)
So rather than writing heaps of comments/message code, I rely on well-named assertions combined with helpful default messages. If a well-named assertion with a helpful error message has not already been provided, I extend the test case and add my own.
self.assert_user_is_administrator(user)
is very readable and, if it fails, will have a nice message that I provided in only one location.
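A minimal sketch of that last idea; the user object and its is_admin attribute are made up for illustration:
from types import SimpleNamespace
import unittest


class CustomAssertionsMixin:
    def assert_user_is_administrator(self, user):
        # hypothetical check; adapt to however your user model exposes this
        if not getattr(user, "is_admin", False):
            self.fail(f"{user!r} is not an administrator")


class UserTests(CustomAssertionsMixin, unittest.TestCase):
    def test_admin_flag(self):
        user = SimpleNamespace(is_admin=True)  # stand-in for a real user object
        self.assert_user_is_administrator(user)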
