Using mock.patch + parametrize in a Pytest Class Function

I have been working with FastAPI and have some async methods that generate an auth token.
While writing the unit tests, I'm getting the following error:
TypeError: test_get_auth_token() missing 2 required positional arguments: 'test_input' and 'expected_result'
My unit test looks like:
class TestGenerateAuthToken(IsolatedAsyncioTestCase):
    """
    """

    @pytest.mark.parametrize(
        "test_input,expected_result",
        [("user", "user_token"), ("admin", "admin_token")],
    )
    @mock.patch("myaauth.get_token", new_callable=AsyncMock)
    async def test_get_auth_token(self, get_token_mock, test_input, expected_result):
        """
        Test get_auth_header
        """

        def mock_generate_user_token(_type):
            return f"{_type}_token"

        get_token_mock.side_effect = mock_generate_user_token

        assert await myaauth.get_token(test_input) == expected_result
I know it would be as simple as just removing the parametrize, but I want to know whether combining them is possible.

It is not related to mock.
The reason is that pytest.mark.parametrize cannot be applied to methods of unittest.IsolatedAsyncioTestCase (or any other unittest.TestCase subclass).
Instead, you could use a pytest plugin such as pytest-asyncio to let pytest itself run the coroutine test function.
from unittest import mock
from unittest.mock import AsyncMock

import pytest

import myaauth


class TestGenerateAuthToken:
    @pytest.mark.parametrize(
        "test_input,expected_result",
        [("user", "user_token"), ("admin", "admin_token")],
    )
    @pytest.mark.asyncio
    @mock.patch("myaauth.get_token", new_callable=AsyncMock)
    async def test_get_auth_token(self, get_token_mock, test_input, expected_result):
        """
        Test get_auth_header
        """

        def mock_generate_user_token(_type):
            return f"{_type}_token"

        get_token_mock.side_effect = mock_generate_user_token

        assert await myaauth.get_token(test_input) == expected_result
-> % python -m pytest pytest_coroutine.py
=================================================================================================== test session starts ===================================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
rootdir: /Users/james/PycharmProjects/stackoverflow
plugins: asyncio-0.20.2
asyncio: mode=strict
collected 2 items
pytest_coroutine.py .. [100%]
==================================================================================================== 2 passed in 0.03s ====================================================================================================
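Not part of the answer above, but if you would rather keep unittest.IsolatedAsyncioTestCase instead of switching to a plain pytest class, a rough equivalent of parametrize is unittest's subTest. A minimal sketch, reusing the myaauth module from the question:

from unittest import IsolatedAsyncioTestCase, mock
from unittest.mock import AsyncMock

import myaauth


class TestGenerateAuthToken(IsolatedAsyncioTestCase):
    @mock.patch("myaauth.get_token", new_callable=AsyncMock)
    async def test_get_auth_token(self, get_token_mock):
        get_token_mock.side_effect = lambda _type: f"{_type}_token"
        cases = [("user", "user_token"), ("admin", "admin_token")]
        for test_input, expected_result in cases:
            # subTest reports each case separately, similar to parametrize
            with self.subTest(test_input=test_input):
                self.assertEqual(await myaauth.get_token(test_input), expected_result)

Each failing case is then reported individually, which gives roughly the same diagnostics as separate parametrized tests.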


How to execute Python functions using Hypothesis' composite strategy?

I am trying to execute a function decorated with Hypothesis' @strategy.composite decorator.
I know I can test functions using the @given decorator, such as -
from hypothesis import given
from hypothesis import strategies as st

@given(st.integers(min_value=1))
def test_bar(x):
    assert x > 0
with pytest using - pytest <filename.py>.
But in the case of a function with the @strategy.composite decorator like -
from hypothesis import strategies as st
from hypothesis import given
import pytest

@st.composite
def test_foo(draw):
    arg1 = draw(st.integers(min_value=1))
    arg2 = draw(st.lists(st.integers(), min_size=arg1, max_size=arg1))
    print(arg1, " ", arg2)
    assert(len(arg2) == arg1)
I am unable to execute the tests in a similar way.
When using pytest, the test fails to run (and executing the Python file directly does nothing) -
[reik#reik-msi tests]$ pytest testhypo.py
==================================== test session starts ====================================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /home/reik/tests
plugins: hypothesis-5.16.0, lazy-fixture-0.6.3
collected 1 item
testhypo.py F [100%]
========================================= FAILURES ==========================================
_________________________________________ test_foo __________________________________________
item = <Function test_foo>

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_call(item):
        if not hasattr(item, "obj"):
            yield
        elif not is_hypothesis_test(item.obj):
            # If @given was not applied, check whether other hypothesis
            # decorators were applied, and raise an error if they were.
            if getattr(item.obj, "is_hypothesis_strategy_function", False):
>               raise InvalidArgument(
                    "%s is a function that returns a Hypothesis strategy, but pytest "
                    "has collected it as a test function. This is useless as the "
                    "function body will never be executed. To define a test "
                    "function, use @given instead of @composite." % (item.nodeid,)
                )
E               hypothesis.errors.InvalidArgument: testhypo.py::test_foo is a function that returns a Hypothesis strategy, but pytest has collected it as a test function. This is useless as the function body will never be executed. To define a test function, use @given instead of @composite.

/usr/lib/python3.8/site-packages/hypothesis/extra/pytestplugin.py:132: InvalidArgument
================================== short test summary info ==================================
FAILED testhypo.py::test_foo - hypothesis.errors.InvalidArgument: testhypo.py::test_foo is...
===================================== 1 failed in 0.06s =====================================
I tried adding the function call test_foo() but I got the same error.
Then I tried adding @given above the function and got a different error -
========================================== ERRORS ===========================================
________________________________ ERROR at setup of test_foo _________________________________
file /usr/lib/python3.8/site-packages/hypothesis/core.py, line 903
  def run_test_as_given(test):
E       fixture 'test' not found
>       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
>       use 'pytest --fixtures [testpath]' for help on them.
/usr/lib/python3.8/site-packages/hypothesis/core.py:903
================================== short test summary info ==================================
ERROR testhypo.py::test_foo
If I use @given() - note the extra parentheses - instead, I get another error -
========================================= FAILURES ==========================================
_________________________________________ test_foo __________________________________________
arguments = (), kwargs = {}

    def wrapped_test(*arguments, **kwargs):
>       raise InvalidArgument(message)
E       hypothesis.errors.InvalidArgument: given must be called with at least one argument
/usr/lib/python3.8/site-packages/hypothesis/core.py:234: InvalidArgument
================================== short test summary info ==================================
FAILED testhypo.py::test_foo - hypothesis.errors.InvalidArgument: given must be called wit...
I tried wrapping the code inside another function -
from hypothesis import strategies as st
from hypothesis import given
import pytest

def demo():
    @st.composite
    def test_foo(draw):
        arg1 = draw(st.integers(min_value=1))
        arg2 = draw(st.lists(st.integers(), min_size=arg1, max_size=arg1))
        print(arg1, " ", arg2)
        assert(len(arg2) == arg1)
but that did not work either -
[reik#reik-msi tests]$ python testhypo.py
[reik#reik-msi tests]$ pytest testhypo.py
==================================== test session starts ====================================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /home/reik/tests
plugins: hypothesis-5.16.0, lazy-fixture-0.6.3
collected 0 items
=================================== no tests ran in 0.00s ===================================
(I also tried putting the demo function call at the end of the file, but that did not change the testing behaviour in any way)
The Hypothesis quick start guide says that calling the function would work, but it clearly does not. (To be fair, the documentation does not specify how to run tests decorated with @composite.)
How do I test functions that are decorated with @strategy.composite? I do not have to use pytest - I would prefer not having to use it, actually, but it seemed the easiest way to test the functions (which were decorated with @given), so I decided to go that route.
@st.composite is a helper function for defining custom strategies, not tests.
What you're trying to do can be accomplished by using @given and st.data():
@given(st.data())
def test_foo(data):
    x = data.draw(st.integers())
    ...
https://hillelwayne.com/post/property-testing-complex-inputs/ gives a good overview of how these techniques are used.
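Applied to the test_foo logic from the question, a fuller version of this pattern might look like the sketch below (the max_value bound is my addition, to keep the drawn list sizes manageable):

from hypothesis import given, strategies as st

@given(st.data())
def test_foo(data):
    # Draw the length first, then a list of exactly that length,
    # mirroring the original @st.composite body.
    arg1 = data.draw(st.integers(min_value=1, max_value=50))
    arg2 = data.draw(st.lists(st.integers(), min_size=arg1, max_size=arg1))
    assert len(arg2) == arg1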
I had the same issue. Here is a minimal and complete working example of using @composite.
from dataclasses import dataclass

from hypothesis import given
from hypothesis import strategies as st


@dataclass
class Container():
    id: int
    value: str


@st.composite
def generate_containers(draw):
    _id = draw(st.integers())
    _value = draw(st.characters())
    return Container(id=_id, value=_value)


@given(generate_containers())
def test_container(container):
    assert isinstance(container, Container)
    assert isinstance(container.id, int)
    assert isinstance(container.value, str)

Mark test as passed from inside a fixture in pytest

With pytest.skip or pytest.xfail, I can mark a test as skipped or xfailed from inside the fixture. There is no pytest.pass, though. How can I mark it as passed?
import pytest

@pytest.fixture
def fixture():
    # pytest.skip()
    pytest.xfail()

def test(fixture):
    assert False
Unfortunately, I don't know of a way to mark tests as passed from inside a fixture, but you can call pytest.skip in your test with a message such as "pass"; a hook in conftest.py can then check for this "pass" message and turn the result into a pass:
conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(call):
    outcome = yield
    if outcome.get_result().outcome == 'skipped' and call.excinfo.value.msg == 'pass':
        outcome.get_result().outcome = 'passed'
test_skip.py
# -*- coding: utf-8 -*-
import pytest

def test_skip_pass():
    pytest.skip('pass')

def test_skip_pass_2():
    pytest.skip('skip')
result:
collecting ... collected 2 items
test_skip.py::test_skip_pass PASSED [ 50%]
test_skip.py::test_skip_pass_2 SKIPPED [100%]
Skipped: skip
======================== 1 passed, 1 skipped in 0.04s =========================

How can I access the overall test result of a pytest test run during runtime?

Depending on the overall test result of a pytest run, I would like to execute a conditional teardown. This means the overall result must be accessed after all tests have been executed, but before the test runner exits. How can I achieve this?
I could not find a suitable pytest hook for accessing the overall test result.
You don't need one; just collect the test results yourself. This is the blueprint I usually use when in need of accessing the test results in batch:
# conftest.py
import pytest

def pytest_sessionstart(session):
    session.results = dict()

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()

    if result.when == 'call':
        item.session.results[item] = result
Now all test results are stored in the session.results dict; example usage:
# conftest.py (continued)

def pytest_sessionfinish(session, exitstatus):
    print()
    print('run status code:', exitstatus)
    passed_amount = sum(1 for result in session.results.values() if result.passed)
    failed_amount = sum(1 for result in session.results.values() if result.failed)
    print(f'there are {passed_amount} passed and {failed_amount} failed tests')
Running the tests will yield:
$ pytest -sv
================================== test session starts ====================================
platform darwin -- Python 3.6.4, pytest-3.7.1, py-1.5.3, pluggy-0.7.1 -- /Users/hoefling/.virtualenvs/stackoverflow/bin/python3.6
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow/so-51711988, inifile:
collected 3 items
test_spam.py::test_spam PASSED
test_spam.py::test_eggs PASSED
test_spam.py::test_fail FAILED
run status code: 1
there are 2 passed and 1 failed tests
======================================== FAILURES =========================================
_______________________________________ test_fail _________________________________________
    def test_fail():
>       assert False
E       assert False
test_spam.py:10: AssertionError
=========================== 1 failed, 2 passed in 0.05 seconds ============================
In case the overall pytest exit code (exitstatus) is sufficient info (info about # passed, # failed, etc. not required) use the following:
# conftest.py

def pytest_sessionfinish(session, exitstatus):
    print()
    print('run status code:', exitstatus)
Accessing the error
You can access the error details from the call.excinfo object:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()

    if result.outcome == "failed":
        exception = call.excinfo.value
        exception_class = call.excinfo.type
        exception_class_name = call.excinfo.typename
        exception_type_and_message_formatted = call.excinfo.exconly()
        exception_traceback = call.excinfo.traceback
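To tie this back to the original question about conditional teardown, here is a rough sketch (my addition, building on the session.results dict collected above) that runs extra cleanup only when something failed:

# conftest.py (sketch) - assumes session.results is populated as shown above
def pytest_sessionfinish(session, exitstatus):
    any_failed = any(result.failed for result in session.results.values())
    if any_failed:
        # Replace this with your real conditional teardown logic;
        # nothing here is a pytest API beyond the hook itself.
        print('at least one test failed - running conditional teardown')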

pytest fixture of fixture, not found

Based on this stackoverflow: pytest fixture of fixtures
I have the following code in the same file:
@pytest.fixture
def form_data():
    return { ... }

@pytest.fixture
def example_event(form_data):
    return {... 'data': form_data, ... }
But when I run pytest, it complains that fixture 'form_data' not found.
I am new to pytest, so I am not even sure if this is possible?
Yes, it is possible.
If you have the test and all the fixtures in 1 file:
test.py
import pytest

@pytest.fixture
def foo():
    return "foo"

@pytest.fixture
def bar(foo):
    return foo, "bar"

def test_foo_bar(bar):
    expected = ("foo", "bar")
    assert bar == expected
and run pytest test.py, then: Success!!!
======================================= test session starts ========================================
platform darwin -- Python 3.6.8, pytest-4.3.0
collected 1 item
test.py . [100%]
===================================== 1 passed in 0.02 seconds =====================================
But if you put the test in a different file, test_foo_bar.py:
from test import bar

def test_foo_bar(bar):
    expected = ("foo", "bar")
    assert bar == expected
and run pytest test_foo_bar.py, expecting (like I did) that importing only the bar fixture is enough, since on import it would already have resolved the foo fixture, then you get the error you are getting:
======================================= test session starts ========================================
platform darwin -- Python 3.6.8, pytest-4.3.0
collected 1 item
test2.py E [100%]
============================================== ERRORS ==============================================
__________________________________ ERROR at setup of test_foo_bar __________________________________
file .../test_foo_bar.py, line 3
def test_foo_bar(bar):
.../test.py, line 7
  @pytest.fixture
def bar(foo):
E fixture 'foo' not found
> available fixtures: TIMEOUT, bar, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, doctest_namespace, monkeypatch, no_cover, once_without_docker, pytestconfig, record_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
.../test.py:7
===================================== 1 error in 0.03 seconds ======================================
To fix this, also import the foo fixture in the test_foo_bar.py module, as in the sketch below.
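A minimal sketch of that fix (my wording, not the original answer's code): import both fixtures so pytest can resolve bar's dependency on foo in this module as well.

# test_foo_bar.py
from test import foo, bar  # noqa: F401 - imported so pytest registers both fixtures

def test_foo_bar(bar):
    expected = ("foo", "bar")
    assert bar == expected

In practice, the more common approach is to move shared fixtures into a conftest.py next to the tests; pytest then discovers them without any imports.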

Reusing code for Unit Tests where method changes

I know the title is very poor but I cannot think of a more eloquent way of stating the issue. I have a lot of unit tests:
def test_send_new_registration_email(self):
    emails = NewEmail(email_client=MagicMock())
    emails.send_email = MagicMock()
    emails.send_marketing_email(recipients(), new_registration_payload())
    emails.send_email.assert_called_with(new_registration_output())
and
def test_send_new_comment_email(self):
    emails = NewEmail(email_client=MagicMock())
    emails.send_email = MagicMock()
    emails.send_marketing_email(recipients(), new_registration_payload())
    emails.send_email.assert_called_with(new_comment_output())
There are twenty of these unit tests, all following very similar patterns; basically I compare the input to the desired output. There must be a way to keep a list of inputs and a list of outputs and compare them.
I could do a for loop, e.g.
def test_send_new_registration_email(self):
    for index, input in enumerate(inputs):
        emails = NewEmail(email_client=MagicMock())
        emails.send_email = MagicMock()
        emails.send_marketing_email(input)
        emails.send_email.assert_called_with(output[index])
However, is there a cleaner way to do this?
You are looking for parameterized tests. However, the actual implementation depends on what library you are using for unit testing. The vanilla unittest does not provide any support for parameterizing, so you will need to install third-party packages. An example with parameterized (pip install parameterized):
from parameterized import parameterized

@parameterized.expand([
    ((recipients(), new_registration_payload(), ), new_registration_output(), ),
    ((recipients(), new_registration_payload(), ), new_comment_output(), ),
])
def test_send_new_comment_email(self, input, output):
    emails = NewEmail(email_client=MagicMock())
    emails.send_email = MagicMock()
    emails.send_marketing_email(*input)
    emails.send_email.assert_called_with(output)
The test will now be executed twice with both the test inputs provided.
If you intend to write and run your tests with pytest instead (this is what I'm using myself), it already offers parameterizing of tests out of the box:
import pytest

data = [
    ((recipients(), new_registration_payload(), ), new_registration_output(), ),
    ((recipients(), new_registration_payload(), ), new_comment_output(), ),
]

@pytest.mark.parametrize("input, output", data)
def test_send_new_comment_email(input, output):
    emails = NewEmail(email_client=MagicMock())
    emails.send_email = MagicMock()
    emails.send_marketing_email(*input)
    emails.send_email.assert_called_with(output)
The test will be run twice:
$ pytest test_foo.py --collect-only
======== test session starts ========
platform darwin -- Python 3.6.3, pytest-3.2.5, py-1.5.2, pluggy-0.4.0
rootdir: /private/tmp, inifile:
plugins: mock-1.6.3, cov-2.5.1
collected 2 items
<Module 'test_foo.py'>
<Function 'test_send_new_comment_email[input0-registration_output]'>
<Function 'test_send_new_comment_email[input1-comment_output]'>
======== no tests ran in 0.01 seconds ========
