We use py.test in a project and use fixtures for most test cases, but I see no way to use fixtures in doctest files.
To give an example with some code snippets: I have a browser fixture in conftest.py like:
import pytest

@pytest.fixture
def browser(request):
    from wsgi_intercept import zope_testbrowser
    browser = zope_testbrowser.WSGI_Browser()
    [...]
    return browser
and use it in the file test_browser.txt like:
>>> browser.open('some_url')
>>> browser.url == 'some_url'
True
But I can't see a way to get the fixture into a doctest file. Is this possible at all with py.test?
It isn't supported at the moment. pytest would need to know at collection time which fixtures are going to be used in a doctest. If we can come up with a way to declare which fixtures a doctest uses, it shouldn't be hard to add support in _pytest/doctest.py. Maybe it's also possible to automatically detect which fixtures a doctest needs; I'm not sure.
There's a pull request attempting to implement this over at the pytest repository. It provides two globals to doctest files: fixture_request and get_fixture (a convenience shortcut for fixture_request.getfuncargvalue). The intended use is:
>>> browser = get_fixture('browser')
>>> browser.open('some_url')
This is different from the .. pytest-fixtures: ... line as suggested by Holger above, but was easier to implement... :) Needless to say, it's up for discussion, of course!
I understand that pytest fixtures raise an error when calling a fixture directly from within a test. But I don't fully understand why. For context, I am a junior dev new to python so I may be missing something obvious that needs explaining.
I have a fixture as follows:
import json
from pathlib import Path

import pytest

@pytest.fixture()
def get_fixture_data_for(sub_directory: str, file_name: str) -> dict:
    file_path = Path(__file__).parent / sub_directory
    with open(file_path / file_name) as j:
        data = json.load(j)
    return data
and then a test that says something like
def test_file_is_valid():
    data = get_fixture_data_for('some_subdir', 'some_filename.json')
    # do something with this information
    ...
I have many different test files that will use this fixture function to read data from the files and then use that data when posting to an endpoint for integration tests.
When running this, I get the error that fixtures are not meant to be called directly, and should be created automatically when test functions request them as parameters. But why?
I see in the docs this is mentioned: https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly but I don't understand why this functionality was deprecated.
Answering my own question for any newbies that come across this in the future. As Michael alluded to in the comment above: what I am trying to do is use a helper function as a fixture.
Fixtures are loaded and run once (depending on the scope you give them) when the test suite is run. They are for setting up the test. For example, if you had a dict that needed populating and loading and would then be passed into tests, this would be best handled by a fixture.
However, if you want to manipulate some data generated within the test, you would use a helper function, as this is not something used to set up the test.
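A minimal sketch of the distinction, reusing the code from the question (the fixture name signup_payload and the file names are placeholders):

    import json
    from pathlib import Path

    import pytest

    # Plain helper: no decorator, called directly with arguments.
    def get_fixture_data_for(sub_directory: str, file_name: str) -> dict:
        file_path = Path(__file__).parent / sub_directory
        with open(file_path / file_name) as j:
            return json.load(j)

    # Fixture: takes no arguments from the test; pytest injects it by name.
    @pytest.fixture
    def signup_payload() -> dict:
        return get_fixture_data_for('some_subdir', 'some_filename.json')

    def test_file_is_valid(signup_payload):
        # pytest resolved the fixture automatically; the helper did the file I/O.
        assert signup_payload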
I have a few pytest fixtures I use from third-party libraries and sometimes their names are overly long and cumbersome. Is there a way to create a short alias for them?
For example: the django_assert_max_num_queries fixture from pytest-django. I would like to call this max_queries in my tests.
You cannot just add an alias in the form of
max_queries = django_assert_max_num_queries
because fixtures are looked up by name at run-time and not imported (and even if they can be imported in some cases, this is not recommended).
But you can always write your own fixture that just yields another fixture:
import pytest

@pytest.fixture
def max_queries(django_assert_max_num_queries):
    yield django_assert_max_num_queries
Done this way, max_queries will behave exactly the same as django_assert_max_num_queries.
Note that you should use yield and not return, to make sure that control is returned to the fixture after the test finishes.
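Usage then looks just like the original fixture; a sketch (the query limit of 10 is an arbitrary example):

    def test_fast_view(max_queries):
        with max_queries(10):
            ...  # code under test that must not exceed 10 queries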
I need to create automated tests for several related apps, and I have faced a problem with test data management between tests.
The problem is that the same data must be shared between several apps and/or different APIs.
Right now I have the following structure with pytest, which is working well for me, but I doubt that managing test data in conftest.py is the correct approach.
The overall structure looks like this:
tests/
    conftest.py
    app1/
        conftest.py
        test_1.py
        test_2.py
    app2/
        conftest.py
        test_1.py
        test_2.py
    test_data/
        test_data_shared.py
        test_data_app1.py
        test_data_app2.py
Here is example of test data in tests/conftest.py:
import pytest

from test_data.test_data_shared import test_data_generator, default_data

@pytest.fixture
def random_email():
    email = test_data_generator.generate_random_email()
    yield email
    delete_user_by_email(email)

@pytest.fixture()
def sign_up_api_test_data(environment, random_email):
    """
    environment is also a fixture; it captures a value from pytest options
    """
    data = {
        "email": random_email,
        "other_data": default_data.get_required_data(),
        "hash": test_data_generator.generate_hash(environment),
    }
    yield data
    do_some_things_with_data(data)
It's very comfortable to use fixtures for these purposes, because of postconditions (teardown), scopes, and other sweet things (note that the apps have a lot of logic and relationships, so I cannot simply hardcode the data or migrate it into a JSON file, for example).
Similar things can be found in tests/app1/conftest.py and tests/app2/conftest.py for data used in app1 and app2 respectively.
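For illustration, an app-level conftest.py can build on the shared fixtures simply by requesting them by name; a sketch with a hypothetical app1-specific field:

    # tests/app1/conftest.py
    import pytest

    @pytest.fixture
    def app1_sign_up_data(sign_up_api_test_data):
        # Reuse the shared fixture from tests/conftest.py and extend it.
        data = dict(sign_up_api_test_data)
        data["app"] = "app1"  # hypothetical app1-specific field
        return data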
So, here are two problems:
1. conftest.py becomes a monster with a lot of code.
2. As far as I know, using conftest.py for test data is a bad approach, or am I wrong?
Thanks in advance!
I use conftest.py for test data.
Fixtures are a recommended way to provide test data to tests.
conftest.py is the recommended way to share fixtures among multiple test files.
So, as for #2: I think it's fine to use conftest.py for test data.
Now for #1, "conftest.py becoming too big":
Especially for the top-level conftest.py file, at tests/conftest.py, you can move that content into one or more pytest plugins. Since conftest.py files can be thought of as "local plugins", the process of transforming them into plugins is not too difficult.
See https://docs.pytest.org/en/latest/writing_plugins.html
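As a minimal sketch (the module name my_test_data is hypothetical), the fixtures can be moved into an importable module and loaded through the pytest_plugins hook in the top-level conftest.py:

    # my_test_data/plugin.py  (hypothetical package, installable or on sys.path)
    import pytest

    @pytest.fixture
    def random_email():
        ...

    # tests/conftest.py
    pytest_plugins = ["my_test_data.plugin"]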
You might be interested in looking at pytest-cases: it was actually designed to address this question. You will find plenty of examples in the docs, and cases can sit in dedicated modules, in classes, or in the test files; it really depends on your needs. For example, putting two kinds of test data generators in the same module:
from pytest_cases import parametrize_with_cases, parametrize

def data_a():
    return 'a'

@parametrize("hello", [True, False])
def data_b(hello):
    return "hello" if hello else "world"

def user_bob():
    return "bob"

@parametrize_with_cases("data", cases='.', prefix="data_")
@parametrize_with_cases("user", cases='.', prefix="user_")
def test_with_data(data, user):
    assert data in ('a', "hello", "world")
    assert user == 'bob'
See documentation for details. I'm the author by the way ;)
I haven't been able to find a good explanation of this on the net. I'm guessing that I'm missing something trivial, but I haven't been able to find it, so I came here to ask the experts :)
I have a test where I need to patch a constructor call. Reading the docs, as I understand it, something like this should work, but:
import unittest.mock as mocker
import some_module

mock_class1 = mocker.patch('some_module.some_class')
print(mock_class1 is some_module.some_class)  # Returns False
print(mock_class1)                            # <unittest.mock._patch>
mock_instance1 = mock_class1.return_value     # _patch object has no attr return_value
Instead, I get different output if I do this:
with mocker.patch('some_module.some_class') as mock_class2:
    print(mock_class2 is some_module.some_class)  # Returns True
    print(mock_class2)                            # <MagicMock name=...>
    mock_instance2 = mock_class2.return_value     # No problem
    print(mock_instance2)                         # <NonCallableMagicMock name=...>
Now, for the test itself, I'm using the pytest-mock module, which gives a mocker fixture that behaves like the first code block.
I would like to know:
why the behavior differs depending on the way one calls the mock framework
is there a clean way to trigger the behavior of the second code block without the with clause?
1) The pytest-mock plugin is being developed to avoid the use of context managers, and probably not everybody is fond of the way the standard mock plays with function parameters. 2) Not really. It is intended to be used either as a context manager or as a function decorator.
I think it is possible to use the mocker package without pytest.
References
https://github.com/pytest-dev/pytest-mock
https://www.packtpub.com/mapt/book/application_development/9781847198846/5/ch05lvl1sec45/integrating-with-python-mocker
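For completeness: the difference between the two code blocks in the question is that patch() returns a _patch object, and the replacement MagicMock only exists once the patcher is started, which the with statement (or decorator) does for you. The standard start()/stop() methods give the second behavior without a with clause; a sketch using the question's names:

    import unittest.mock as mock
    import some_module

    patcher = mock.patch('some_module.some_class')
    mock_class = patcher.start()                 # patching happens here
    print(mock_class is some_module.some_class)  # True
    mock_instance = mock_class.return_value      # works, as inside the with block
    patcher.stop()                               # the original class is restored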
What about installing pytest-mock and creating a test like this:
import itertools

def test1(mocker):
    mock_class1 = mocker.patch('itertools.count')
    print(mock_class1 is itertools.count)      # True: mocker.patch returns the started mock
    print(mock_class1)                         # <MagicMock name=...>
    mock_instance1 = mock_class1.return_value  # Magic stuff...
Or maybe use monkeypatching? Just do not use the standard unittest.mock with pytest.
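A sketch of the monkeypatch alternative (pytest's built-in fixture; the patch is undone automatically at test teardown):

    import itertools
    from unittest.mock import MagicMock

    def test2(monkeypatch):
        fake_count = MagicMock()
        monkeypatch.setattr(itertools, 'count', fake_count)
        assert itertools.count is fake_count
        fake_instance = fake_count.return_value  # behaves like the mock above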
I'm writing a package that uses a YAML config parser in multiple contexts.
I need to test the parser with py.test, and I'm writing a class for each context where the parser sub-package gets applied.
So I need to load a YAML file for each class, and have it available to every test in that class.
Is my example below a good approach or is there something else I should be doing?
import pytest
import yaml
import my_package

class TestContextOne:  # pytest only collects classes whose names start with "Test"
    @pytest.fixture
    def parse_context(self):
        return my_package.parse.context  # module within parser for a certain context

    @pytest.fixture
    def test_yaml_context(self):
        with open('test_yaml.yml') as yaml_file:
            return yaml.safe_load(yaml_file)

    def test_validation_function1(self, parse_context, test_yaml_context):
        test_yaml = test_yaml_context['validation_function1']
        # test that a missing key raises an error
        with pytest.raises(KeyError):
            parse_context.validation_function1(test_yaml['missing_key_case'])
        # test that an invalid value raises an error
        with pytest.raises(ValueError):
            parse_context.validation_function1(test_yaml['invalid_value_case'])
It works. I thought I'd ask because I don't find much in the py.test docs, even though I feel that something along these lines would be sort of a common use case.
Specifically:
I'm not sure why I need to have the fixtures
if I load the YAML at the test module level, the tests can't find it; is this just the way py.test works?
should I just import my_package.parse.context at the test module level?
You don't need them. I'd define setup_module() to read and parse test_yaml.yml once for all tests; see the sketch after these answers.
No. Strange problem. If I were debugging it, I'd log the current directory. Or simply open the file relative to the test file: open(os.path.join(os.path.dirname(__file__), 'test_yaml.yml')).
Yes, why not?
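A minimal sketch combining points 1 and 2, assuming the YAML file sits next to the test module:

    import os

    import pytest
    import yaml
    import my_package

    test_yaml = None

    def setup_module(module):
        # Read and parse the YAML once for every test in this module.
        global test_yaml
        path = os.path.join(os.path.dirname(__file__), 'test_yaml.yml')
        with open(path) as yaml_file:
            test_yaml = yaml.safe_load(yaml_file)

    def test_validation_function1():
        data = test_yaml['validation_function1']
        with pytest.raises(KeyError):
            my_package.parse.context.validation_function1(data['missing_key_case'])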