Simplest way of parameterizing tests in Python?

I have a library with a bunch of different objects that have similar expected behavior, so I want to run similar, but not necessarily identical, tests on them.
To be specific, let's say I have some sorting functions and a test that checks whether a sorting function actually sorts. Some sorting functions are intended to work on some inputs but not others.
I'd like to write something close to the code below. However, nose won't do a good job of telling me exactly where the tests failed and with what inputs. If check_sort fails for sort2 on case2, I won't be able to see that.
def check_sort(sortalg, vals):
    assert sortalg(vals) == sort(vals)

def test_sorts():
    case1 = [1, 3, 2]
    case2 = [2, 31, 1]
    check_sort(sort1, case1)
    for c in [case1, case2]:
        check_sort(sort2, c)
I would like to be able to simply add a decorator to check_sort to tell nose that it's a nested test. Something like
@nested_test
def check_sort(sortalg, vals):
    assert sortalg(vals) == sort(vals)
The idea being that when it gets called, it registers itself with nose and will report its inputs if it fails.
It looks like pytest provides pytest.mark.parametrize, but that seems rather awkward to me. I have to put all my arguments above the function in one spot, so I can't call it repeatedly in my tests. I also don't think it supports nesting more than one level.
Nose also provides test generators, which seem closer to what I want, but still not as clear as I would hope.

Using the generative test features is probably the intended way to do this with nose and py.test.
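For instance, here is a minimal sketch using nose's yield-based test generators, assuming the sort1 and sort2 functions from the question and using the built-in sorted as the reference; each yielded argument set becomes its own test, and the arguments show up in the failure report:

def check_sort(sortalg, vals):
    # Compare against the built-in sorted() as the reference implementation.
    assert sortalg(vals) == sorted(vals)

def test_sorts():
    case1 = [1, 3, 2]
    case2 = [2, 31, 1]
    # Each yielded tuple (function, *args) is run as a separate test case.
    yield check_sort, sort1, case1
    for c in [case1, case2]:
        yield check_sort, sort2, c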
That said, you can dynamically add functions (tests) to a class after it has been created. That is the technique used by the Lib/test/test_decimal.py code in the standard library.
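A rough sketch of that technique, again assuming sort1 and sort2 from the question; each generated method gets a descriptive name, so a failure points at the exact algorithm and input:

import unittest

class SortTests(unittest.TestCase):
    pass

def _make_test(sortalg, vals):
    def test(self):
        self.assertEqual(sortalg(vals), sorted(vals))
    return test

# Attach one test method per (algorithm, input) pair to the class.
cases = [(sort1, [1, 3, 2]), (sort2, [1, 3, 2]), (sort2, [2, 31, 1])]
for i, (alg, case) in enumerate(cases):
    setattr(SortTests, 'test_%s_case%d' % (alg.__name__, i), _make_test(alg, case))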

You can use nose-ittr; it's a nose extension that supports parametrized testing.
Example:
class TestSorts(TestCase):
    @ittr(case1=[1, 3, 2], case2=[2, 31, 1])
    def test_sorts(self):
        check_sort(sort1, self.case1)
        check_sort(sort2, self.case2)

Where is the order of parameters (naming of them?) in a pytest function documented?

This seems like a very basic question, but I looked at https://docs.pytest.org/en/6.2.x/reference.html, which I think is the reference for pytest, and couldn't find the answer.
I wanted to combine a pytest.fixture (setup/teardown using yield) with pytest.mark.parametrize ... and I realised that they both have to be included as parameters in the test function.
A simple experiment showed that it doesn't appear to matter what order they are listed in among the parameters, so my working assumption is that it never matters, and also that there are no (optional) named parameters in these methods.
It'd be nice to know if this is actually documented somewhere, and if I've got this right.
This is not explicitly stated, as far as I can see, but follows implicitly from the documentation:
When pytest goes to run a test, it looks at the parameters in that test function’s signature, and then searches for fixtures that have the same names as those parameters. Once pytest finds them, it runs those fixtures, captures what they returned (if anything), and passes those objects into the test function as arguments.
(Side note: I generally recommend checking this documentation; it has been reworked recently and is very comprehensive, IMO.)
Fixtures are always looked up by name, so the order they appear in the argument list should not matter in principle.
This is also true for the arguments given in pytest.mark.parametrize, as observed.
Generally, the execution order of independent fixtures should not matter, and if it does, it is either a bug or the fixtures are not compatible.
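As a small illustration (not from the original answer), the same parametrized test passes whether the built-in tmp_path fixture comes before or after the parametrized argument:

import pytest

@pytest.mark.parametrize("value", [1, 2, 3])
def test_fixture_first(tmp_path, value):
    # The fixture is matched by name, not by position.
    assert tmp_path.exists()
    assert value in (1, 2, 3)

@pytest.mark.parametrize("value", [1, 2, 3])
def test_param_first(value, tmp_path):
    # Same test with the arguments swapped; it behaves identically.
    assert tmp_path.exists()
    assert value in (1, 2, 3)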
There is one caveat: if you use fixtures together with positional parameters introduced by the unittest.mock.patch or unittest.mock.patch.object decorators (which pytest supports), the fixture arguments must always come last:
from unittest.mock import patch

@patch("module.class")
def test_correct(mocked_class, capsys):
    ...

@patch("module.class")
def test_incorrect(capsys, mocked_class):
    ...
    # fails because "mocked_class" is seen as an unknown fixture
This is not surprising, given how positional arguments work.
A way to avoid this is to use pytest-mock, which provides the mocker fixture:
def test_correct(mocker, capsys):
    mocked_class = mocker.patch("module.class")
    ...

def test_correct2(capsys, mocker):
    mocked_class = mocker.patch("module.class")
    ...

Assert String Upper is Called

How do I write a test to validate whether a string manipulation was called or not? In this specific situation I'm trying to test that upper was called at least once. Since this is a Python built-in method, it's a little different and I can't wrap my head around it.
# my function that returns an uppercase string
def my_upper(str_to_upper: str) -> str:
    return str(str_to_upper).upper()

# my test that should determine that .upper() was called
def test_my_upper():
    # I assume I need some kind of mock here?
    my_upper('a')
    assert upper.call_count == 1
Update: I need to know if the core implementation of a very large product has changed. If I implement string manipulation and another dev comes in and changes how it works, I want tests to immediately let me know, so I can verify whether the implementation they added works.
Another update: here's what I've tried. It's complaining it can't find library 'str'.
from mock import patch

@patch("str.upper")
def test_my_upper(mock_upper):
    my_upper('a')
    assert mock_upper.call_count == 1
Answer for the main question: you can use a pytest-mock spy.
Reference: https://pypi.org/project/pytest-mock/
def test_spy_method(mocker):
    spy = mocker.spy(str, 'upper')
    my_upper('a')
    spy.assert_called_once()  # more advanced assertions, e.g. on call arguments or counts, can be used as well
Answer to the first update: it depends entirely on the tests you have in place. If you have plenty of tests covering each behavior, you MIGHT catch such a change; but if the change keeps the test results the same, the tests will simply continue to pass. As a nitpick, tests aren't really meant for this kind of concern, but to check the correct behavior of a piece of software. So if the code changes and the regression tests keep passing within your coverage threshold, that shouldn't be a problem.
Answer to the second update: indeed, str isn't an imported object, at least not in the usual sense. From what I understand of the question, you want to know about calls to a given str method, and that use case fits a spy well. Another point: you don't need to create a wrapper around a method just to make it testable; the code should "live" apart from the tests.
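For completeness, a minimal sketch that asserts on the observable behaviour of my_upper (the function from the question) rather than on the call itself, which sidesteps patching built-ins entirely:

def test_my_upper_returns_uppercase():
    # Behaviour-level check: no mocking or patching of str needed.
    assert my_upper('a') == 'A'
    assert my_upper(123) == '123'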

How to properly unit test code inside of a Python function

How can I unit test variables/values that are inside a function?
This is a rather basic question about unit testing (I am using pytest): I want to make sure all my code behaves as expected.
There must be a proper way to do that, but I haven't found it. So far I try to split my code into as many functions as possible, to get as many return values as possible that I can test. But I am not able to test inside those functions.
Here I can only test whether the returned interp_function works as expected, but there is no way to test the rest of the code.
def multidim_interp_function(grid=None, max_error_in_mm=1, max_error_in_deg=1):
    def interp_function(array_of_pts):
        fct_list = create_interp(grid)
        max_error = [max_error_in_mm] * 3
        if len(grid) == 6:
            max_error.extend([max_error_in_deg] * 3)
        return np.column_stack([error * fct_list[i](array_of_pts) for i, error in enumerate(max_error)])
    return interp_function
You don't need to test the implementation of your function. If you wrote foo = 'bar' inside your function, then you don't need to test whether foo correctly has been assigned the value 'bar'; you can just expect that to work. With unit tests you want to be one step more abstract. You want to check whether your function multidim_interp_function returns the correct results given some known input. Treat the function like you'd treat other functions in Python: you wouldn't write a unit test to figure out how max() works internally, instead you'd write a test asserting that max(3, 4) returns the value 4.
Not only is it impractical to test the "internals" of a function; those internals may also change. If you're writing your unit tests and you figure out you have some bug in your code, then you're going to change your code. Or you may later come back and refactor the internals to make them more efficient, or to deduplicate some code in your module, or whatever. You wouldn't want to rewrite your unit tests each time, so you shouldn't write your unit tests to be too specific. Your unit tests should test the public interface of the function (arguments and return values), so you can be sure that this interface doesn't change (i.e. the function continues to behave the same) even if you move some code around. If through your unit tests or otherwise you figure out that there is some misbehavior inside the function, then you can step in with a debugger and check each statement one by one to find where the issue is.
To get into the right mindset, try Test-Driven Development, in which you write your tests first, essentially deciding how the function should behave, and then implement the function internals, all the while being able to check that your implementation conforms to the expectation.
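As a rough illustration of that mindset, here is a made-up make_scaler factory (not the code from the question); the test exercises only the returned function's inputs and outputs:

def make_scaler(factor):
    # Returns a closure, similar in shape to multidim_interp_function above.
    def scale(values):
        return [factor * v for v in values]
    return scale

def test_make_scaler_scales_each_value():
    # Only the public interface is exercised: known input, expected output.
    scale = make_scaler(3)
    assert scale([1, 2, 4]) == [3, 6, 12]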
Basically, you want your function to behave consistently, so its results should be predictable. You don't want to test the internal state of your function, but rather its output.
import pytest

def my_calculus(divide, multiply):
    def inner(value):
        return value * multiply / divide
    return inner

calculus = my_calculus(2, 4)

@pytest.mark.parametrize("function,value,expected", [
    (calculus, 2, 4),
    (calculus, 5, 10),
])
def test_my_calculus(function, value, expected):
    assert function(value) == expected

Does it make sense to write unit tests that are just testing if functions are called?

I have the following code:
def task_completed(task):
    _update_status(task, TaskStatus.COMPLETED)
    successful_task(task)

def task_pending(task):
    _update_status(task, TaskStatus.PENDING)
    successful_task(task)

def task_canceled(task):
    _update_status(task, TaskStatus.CANCELED)
    process_task(task)

def successful_task(task):
    process_task(task)
    send_notification(task)

def process_task(task):
    assign_user(task)
    notify_user(task)
    cleanup(task)

def _update_status(task, status):
    task.status = status
    task.save(update_fields=['status'])
I have written the following tests:
def test_task_completed(mocker, task):
    mock_successful_task = mocker.patch('services.successful_task')
    task_completed(task)
    assert task.status == TaskStatus.COMPLETED
    mock_successful_task.assert_called_once_with(task)

def test_task_pending(mocker, task):
    mock_successful_task = mocker.patch('services.successful_task')
    task_pending(task)
    assert task.status == TaskStatus.PENDING
    mock_successful_task.assert_called_once_with(task)

def test_task_canceled(mocker, task):
    mock_process_task = mocker.patch('services.process_task')
    task_canceled(task)
    assert task.status == TaskStatus.CANCELED
    mock_process_task.assert_called_once_with(task)

def test_successful_task(mocker, task):
    mock_process_task = mocker.patch('services.process_task')
    mock_send_notification = mocker.patch('notifications.send_notification')
    successful_task(task)
    mock_process_task.assert_called_once_with(task)
    mock_send_notification.assert_called_once_with(task)

def test_process_task(mocker, task):
    mock_assign_user = mocker.patch('users.assign_user')
    mock_notify_user = mocker.patch('notifications.notify_user')
    mock_cleanup = mocker.patch('utils.cleanup')
    process_task(task)
    mock_assign_user.assert_called_once_with(task)
    mock_notify_user.assert_called_once_with(task)
    mock_cleanup.assert_called_once_with(task)
As you can see, some tests like test_successful_task and test_process_task just test whether specific functions are called.
But does it make sense to write a test for this, or am I misunderstanding something and my unit tests are just bad? I don't know how else I should test these functions.
In my experience, tests like these are very brittle because they depend on implementation details. A unit test should only be concerned with the results of the method being tested. Ideally, this means asserting against the return value. If there are side effects, you can assert those instead. But then you should probably look at those side effects and find a different solution that doesn't require them.
I would say no, they're not useful.
Unit tests are supposed to test functionality: here's some input, I call this, here's my result, does it match what I expect? Things need to be clear and verifiable.
When you have a test which verifies that a method has been called, what do you really have?
Pure uncertainty. OK, a thing has been called, but how is that useful? You are not verifying a result; the method you're calling could do a million things and you have no idea what it does.
Code calling a method is an implementation detail and your unit tests are not supposed to have that kind of knowledge.
Why do we write unit tests?
- to check functionality
- to help refactoring
If you need to change your tests every time your code changes then you haven't really accomplished one of the main reasons for unit testing.
If your code changes and that method is not called anymore, then what?
You now have to go and change the test? Change it to what, though? If your next move is to remove the test, then why did you have it in the first place?
What if someone else has to deal with this issue six months down the road? There is no documentation to check to see why there is a test verifying that a method has been called.
Bottom line, a test like this has zero value and all it does is introduce uncertainty.
White box tests can be useful to detect some regression or to assert that a specific action is made.
For example you can verify that you don't interact with your DB in this particular case or that you've correctly called the notification service.
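For example, a small sketch along those lines, with hypothetical cancel_task, db and notifier collaborators passed in explicitly:

from unittest.mock import Mock

def cancel_task(task, db, notifier):
    # Hypothetical function: cancelling must notify but never touch the DB.
    notifier.send(task)

def test_cancel_does_not_touch_db():
    db, notifier = Mock(), Mock()
    cancel_task("task-1", db, notifier)
    db.save.assert_not_called()
    notifier.send.assert_called_once_with("task-1")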
However, the drawback is that you're likely to change the test when you change the code, because your test is very tied to the implementation.
This can be painful when you are refactoring, because you also need to refactor the test. You could forget an assertion or a step and create a false positive test with a regression.
I would use it only if it makes sense and if you need it to assert what's going on in detail.
You can search the web for TDD: London vs Detroit.
You'll find interesting stuff.
This is not exactly the purpose of unit tests, though it does have uses. Unit tests aim to improve the quality of the code by testing functionality and results; it would be more beneficial to write unit tests that test the functionality of each method called.
That being said, if you have one function that calls 4 other functions and you want to check that they actually get executed in your main block of code, then this makes sense. But you should definitely be writing unit tests for your submethods as well.
Yes, it makes sense. However, I would look at unittest.mock.Mock.assert_called_with
In my experience, yes.
When you design a test, you know that you have to deal with 4 elements:
- pre-conditions (context)
- inputs
- outputs
- post-conditions (side effects)
We all agree that it would be easier to test and code if the behaviour of the function under test depended only on inputs and outputs, but in some cases this cannot happen, especially when your logic deals with I/O and/or its goal is to issue a mutation of the application state. This means that your test has to be aware of post-conditions. But what can make us sure that a post-condition is met?
Take this method:
public class UserService {
    public void addUser(User toAdd) { ... }
}
This method adds a user to the database; put more elegantly, it adds the User to a collection, which is abstracted behind the repository semantics. So the side effect of the method is that userRepository.save(User user) is called. You can mock this method and expect that it has been called once with the given argument, or make the test fail.
Obviously this can be achieved only if the collaborator has been mocked, so the test is not influenced by the behaviour of a unit that is not under test.
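A minimal Python sketch of that idea; the UserService and UserRepository names mirror the example above and the implementation is assumed:

from unittest.mock import Mock

class UserService:
    def __init__(self, user_repository):
        self.user_repository = user_repository

    def add_user(self, user):
        # The post-condition we care about: the user is handed to the repository.
        self.user_repository.save(user)

def test_add_user_saves_to_repository():
    repository = Mock()
    service = UserService(repository)
    user = object()
    service.add_user(user)
    repository.save.assert_called_once_with(user)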
I recognize that the drawback is making the test brittle, but at the same time:
- in TDD, it makes tests fail when they don't call the mocked function, so the test states "hey, addUser relies upon UserRepository.save()!", if you're into this style
- the test will break if the dependent function's interface changes, but we don't want to change interfaces often, am I right?
- before adding dependencies to your method you will think twice, which is a hint to write cleaner code

Proper structure for many test cases in Python with unittest

I am looking into the unittest package, and I'm not sure of the proper way to structure my test cases when writing a lot of them for the same method. Say I have a fact function which calculates the factorial of a number; would this testing file be OK?
import unittest

class functions_tester(unittest.TestCase):
    def test_fact_1(self):
        self.assertEqual(1, fact(1))

    def test_fact_2(self):
        self.assertEqual(2, fact(2))

    def test_fact_3(self):
        self.assertEqual(6, fact(3))

    def test_fact_4(self):
        self.assertEqual(24, fact(4))

    def test_fact_5(self):
        self.assertFalse(1 == fact(5))

    def test_fact_6(self):
        self.assertRaises(RuntimeError, fact, -1)
        # fact(-1)

if __name__ == "__main__":
    unittest.main()
It seems sloppy to have so many test methods for one method. I'd like to just have one test method and put a ton of basic cases in it (i.e. 4! == 24, 3! == 6, 5! == 120, and so on), but unittest doesn't let you do that.
What is the best way to structure a testing file in this scenario?
Thanks in advance for the help.
You can put the asserts in a loop:
def test_fact(self):
    tests = [(1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
    for n, f in tests:
        self.assertEqual(fact(n), f)
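If you are on Python 3.4 or newer, wrapping each iteration in self.subTest makes unittest report exactly which input failed instead of stopping at the first failing assertion; a small variation on the loop above:

def test_fact(self):
    tests = [(1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
    for n, f in tests:
        with self.subTest(n=n):
            # Each case is reported separately, e.g. "test_fact (n=3)".
            self.assertEqual(fact(n), f)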
I'd say your way of doing it is generally fine (but read on).
You could, as interjay suggested, use a loop (and, incidentally, it counts as only one test, because the unittest module counts the number of test functions, not the number of asserts). But I assume you won't exhaustively test every number, or even all numbers within a very large interval. So looping won't save you much, and, especially in testing, you should aim at being explicit.
Having said that, you should test a small number of consecutive numbers (say, 1 through 5), and then try to get a feel for where possible corner cases and failure points are. Say, test for 10, 100, 1000 (that is, change the order of magnitude), negative numbers, zero, etc.
BTW, watch out for your last two tests. The first of them doesn't mean much: fact(5) is different from a LOT of numbers (infinitely many, actually). Test for the correct case; testing for the incorrect ones isn't productive.
def test_fact_5(self):
    self.assertFalse(1 == fact(5))
The second one is badly named: "test_fact_6" makes me think you're testing fact(6). You should name it something like "test_fact_minus_one", or at least "test_fact_negative_number".
def test_fact_6(self):
    self.assertRaises(RuntimeError, fact, -1)
Test naming is very important, both when you're debugging errors and when you refer back to the tests as documentation.
You've said you're looking into unittest, but consider using nose tests instead; they allow you to generate independent test cases programmatically.
How to generate dynamic (parametrized) unit tests in python? (answer)
