Basically, I'm trying to run a test for each route in a list of routes, checking that the web page associated with each view function returns a valid status code.
I want something along the lines of this:
import pytest
from flask import url_for
from myflaskapp import get_app
@pytest.yield_fixture
def app():
    # App context and specific overrides for test env
    yield get_app()

@pytest.yield_fixture
def client(app):
    yield app.test_client()

@pytest.yield_fixture
def routes(app):
    routes = [
        'foo',
        'bar',
        # There's quite a lot of function names here
    ]
    # convert the routes from func names to actual routes
    with app.app_context():
        for n, route in enumerate(routes):
            routes[n] = url_for(route)
            # yield url_for(route)  # NOTE: This would be ideal, but not allowed.
    yield routes

@pytest.mark.parametrize('route', routes)
def test_page_load(client, route):
    assert client.get(route).status_code == 200
I read that you can't mix parametrize with a fixture as an argument, due to something along the lines of interpretation/load/execution order, but how is this solved in terms of 'best practice'?
I saw a solution where you can generate tests from a function directly, and that seems extremely flexible and might be along the lines of what I want: Passing pytest fixture in parametrize (although I can't call a fixture-decorated function directly, so probably not).
That said, I'm new to pytest and I'd love to see more examples of how to generate tests or perform multiple tests in an iteration with little to no restrictions, while adhering to proper pytest styling and the DRY principle. (I know about conftest.py)
I'd prioritize versatility/practicality over proper styling if that matters. (within reason, maintainability is a high priority too)
I want to be able to reference the solution to this problem to help guide how I structure my tests in the future, but I seem to keep hitting roadblocks/limitations or being told by pytest that I can't do X solution the way I would expect/want to.
Relevant Posts:
DRY: pytest: parameterize fixtures in a DRY way
Generate tests from a function: Passing pytest fixture in parametrize
Very simply solution (doesn't apply to this case): Parametrize pytest fixture
Flask app context in PyTest: Testing code that requires a Flask app or request context
Avoiding edge-cases with multiple list fixtures: Why does Pytest perform a nested loop over fixture parameters
Pytest fixtures themselves can be parameterized, though not with pytest.mark.parametrize. (It looks like this type of question was also answered here.) So:
import pytest
from flask import url_for
from myflaskapp import get_app
@pytest.fixture
def app():
    app = get_app()
    # app context stuff trimmed out here
    return app

@pytest.fixture
def client(app):
    client = app.test_client()
    return client

@pytest.fixture(params=[
    'foo',
    'bar',
])
def route(request, app):
    '''GET method urls that we want to perform mass testing on'''
    with app.app_context():
        return url_for(request.param)

def test_page_load(client, route):
    assert client.get(route).status_code == 200
The documentation explains it this way:
Fixture functions can be parametrized in which case they will be called multiple times, each time executing the set of dependent tests, i.e. the tests that depend on this fixture. Test functions usually do not need to be aware of their re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can be configured in multiple ways.
Extending the previous example, we can flag the fixture to create two smtp_connection fixture instances which will cause all tests using the fixture to run twice. The fixture function gets access to each parameter through the special request object:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
def smtp_connection(request):
    smtp_connection = smtplib.SMTP(request.param, 587, timeout=5)
    yield smtp_connection
    print("finalizing {}".format(smtp_connection))
    smtp_connection.close()
The main change is the declaration of params with @pytest.fixture, a list of values for each of which the fixture function will execute and can access a value via request.param. No test function code needs to change.
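To make that last point concrete, here is a minimal sketch of a test that uses the parametrized smtp_connection fixture from the docs excerpt above; the test body is identical whether the fixture has one parameter or many (the assertion assumes the servers answer NOOP with code 250, as in the pytest docs):

# content of test_smtp.py (sketch) -- runs once per configured server
def test_noop(smtp_connection):
    response, _ = smtp_connection.noop()
    assert response == 250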
My current solution that I fumbled my way across is this:
import pytest
from flask import url_for
from myflaskapp import get_app
@pytest.fixture
def app():
    app = get_app()
    # app context stuff trimmed out here
    return app

@pytest.fixture
def client(app):
    client = app.test_client()
    return client

def routes(app):
    '''GET method urls that we want to perform mass testing on'''
    routes = ['foo', 'bar']
    with app.app_context():
        for n, route in enumerate(routes):
            routes[n] = url_for(route)
    return routes

@pytest.mark.parametrize('route', routes(get_app()))
# NOTE: It'd be really nice if I could use routes as a
# fixture and pytest would handle this for me. I feel like I'm
# breaking the rules here doing it this way. (But I don't think I actually am)
def test_page_load(client, route):
    assert client.get(route).status_code == 200
My biggest issue with this solution is that I can't call a fixture directly as a function, and this solution requires either that or doing all the work my fixture does outside of the fixture, which is not ideal. I want to be able to reference this solution when deciding how to structure my tests in the future.
FOR ANYONE LOOKING TO COPY MY SOLUTION FOR FLASK SPECIFICALLY:
My current solution might be worse for some people than it is for me. I use a singleton structure for my get_app(), so it's fine if get_app() is called many times in my case: it calls create_app() and stores the app in a global variable if that variable isn't already defined, basically emulating the behavior of only calling create_app() once.
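For context, here is a minimal sketch of what such a singleton-style get_app() could look like; create_app and the module layout are assumptions for illustration, not code from my actual project:

# myflaskapp.py (sketch)
from flask import Flask

_app = None  # module-level cache holding the single app instance

def create_app():
    app = Flask(__name__)
    # config, blueprints, extensions, etc. would be set up here
    return app

def get_app():
    # Build the app only on the first call; later calls return the cached
    # instance, so calling get_app() from fixtures and parametrize is cheap.
    global _app
    if _app is None:
        _app = create_app()
    return _app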
I manage my configuration for a Python application with https://github.com/theskumar/python-dotenv and I use pytest for my tests.
For a particular set of tests, I want custom configuration specific to each test. Now I know of https://github.com/quiqua/pytest-dotenv, which gives me the ability to set a config per environment (prod/test), but I want finer granularity on a per-test basis. So far, I've handled this by mocking the Config object that contains all of my configuration. This is messy because for each test, I need to mock this Config object for each module where it's loaded.
Ideally, I'd have something like this:
def test_1(config):
    config.foo = 'bar'
    run_code_that_uses_config()

def test_2(config):
    config.foo = 'bleh'
    run_code_that_uses_config()
I ended up using monkeypatch and going with this approach:
import pytest

from my_library.settings import Config

@pytest.fixture
def config(monkeypatch):
    monkeypatch.setattr(Config, 'foo', 'bar')
    return Config

def test_1(config, monkeypatch):
    monkeypatch.setattr(config, 'foo', 'some value specific to this test')
    run_code_that_uses_config()
With monkeypatch I was able to pass around the Config object via a pytest fixture to create a default and override it for each test. The code that uses the config was able to pick up the change. Previously, I'd used patch from the standard mock library, but I had to patch every single place where the config was looked up (https://docs.python.org/3/library/unittest.mock.html#where-to-patch), which made maintenance difficult.
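To show why patching the attribute on the Config class itself avoids the where-to-patch problem, here is a small self-contained sketch (the module names and run_code_that_uses_config are invented for the example): every module that imported the class sees the same class object, so one setattr is enough.

# my_library/settings.py (sketch)
class Config:
    foo = 'default'

# my_library/worker.py (sketch)
from my_library.settings import Config

def run_code_that_uses_config():
    # The attribute is looked up on the class at call time, so a
    # monkeypatch.setattr(Config, 'foo', ...) done in a fixture is visible here.
    return Config.foo

# test_worker.py (sketch) -- the config fixture from above is assumed to live in conftest.py
from my_library.worker import run_code_that_uses_config

def test_uses_patched_value(config, monkeypatch):
    monkeypatch.setattr(config, 'foo', 'patched')
    assert run_code_that_uses_config() == 'patched'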
I had a similar situation; however, I set it as a config variable and used current_app to modify it in the test, e.g.:
import pytest
from flask import current_app

@pytest.mark.usefixtures(...)
def test_my_method(client, session):
    # Initially it was False, but I modify it here to True
    current_app.config['MY_CONFIG_VARIABLE'] = True
    ...
    # Then, do your test here
Hope it helps!
For each application in the project, you need to write tests. Also, for each application you first need to load its test data, which must be deleted after all of that module's tests have passed.
I found several solutions, but none of them seems optimal to me.
First:
In each app's conftest.py I override the django_db_setup fixture, but in this case the data is not deleted after the module's tests pass and remains available to other applications.
In theory, with the help of yield, you can delete all the data after the tests have run.
@pytest.fixture(scope='module')
def django_db_setup(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        call_command('loaddata', './apps/accounts/fixtures/accounts.json')
        call_command('loaddata', './apps/activation/fixtures/activation.json')
        call_command('loaddata', './apps/questionnaire/fixtures/questionnaire.json')
    yield
    # delete test data
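A sketch of how the "# delete test data" step could be filled in, assuming it is acceptable to wipe the test database after the module finishes (flush removes all data, so only use it if nothing else must survive):

import pytest
from django.core.management import call_command

@pytest.fixture(scope='module')
def django_db_setup(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        call_command('loaddata', './apps/accounts/fixtures/accounts.json')
        call_command('loaddata', './apps/activation/fixtures/activation.json')
        call_command('loaddata', './apps/questionnaire/fixtures/questionnaire.json')
    yield
    with django_db_blocker.unblock():
        # Teardown: empty the test database once the module's tests are done.
        call_command('flush', interactive=False)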
Second: in the test class, write a setup fixture like this:
@pytest.fixture(autouse=True)
def setup(self, db):
    call_command('loaddata', './apps/accounts/fixtures/accounts.json')
    call_command('loaddata', './apps/activation/fixtures/activation.json')
    call_command('loaddata', './apps/questionnaire/fixtures/questionnaire.json')
In this case, the data will be loaded as many times as there are tests in the module, which also doesn't seem quite right.
I did something like this in my own tests:
from pytest_django.fixtures import _django_db_fixture_helper

@pytest.fixture(scope='module', autouse=True)
def setup_db(request, django_db_setup, django_db_blocker):
    _django_db_fixture_helper(request, django_db_blocker)
    call_command('loaddata', 'path/to/fixture.json')
I think that pytest_django should export the _django_db_fixture_helper in its official API as a factory fixture.
I am testing the User message function of a web solution using pytest + selenium. The tests will generate a test message to a test user, and then log in that user to verify that the message indeed is displaying for that user.
I need to generate those messages through an internal API.
In order to be able to access this API, I first have to generate an AUTH token through a different API.
So the test scenario is basically:
At test startup, generate a new AUTH token through an API helper function.
Send a request to another API to set a new message (requires the AUTH token)
Send a request to yet another API to "map" this message to a specified user (requires the AUTH token)
Log in test user and verify that the new message is indeed displaying.
My problem is that I want to avoid creating a new AUTH token for every test in my test class - I want to create the token once and have all tests in the same test run use it.
What is the smartest solution to generate one new access token when invoking all tests?
Right now I have come up with something like this, which will generate a new token every time any individual test is run:
import pytest
import helpers.api_access_token_helper as token_helper
import helpers.user_message_generator_api_helper as message_helper
import helpers.login_helper as login_helper
import helpers.popup_helper as popup_helper

class TestStuff(object):

    @pytest.yield_fixture(autouse=True)
    def run_around_tests(self):
        yield token_helper.get_api_access_token()

    def test_one(self, run_around_tests):
        auth_token = run_around_tests
        message_helper.create_new_message(auth_token, some_message_data)
        message_helper.map_message_to_user(auth_token, user_one["user_id"])
        login_helper.log_in_user(user_one["user_name"], user_one["user_password"])
        assert popup_helper.user_message_is_displaying(some_message_data["title"])

    def test_two(self, run_around_tests):
        auth_token = run_around_tests
        message_helper.create_new_message(auth_token, some_other_message_data)
        message_helper.map_message_to_user(auth_token, user_two["user_id"])
        login_helper.log_in_user(user_two["user_name"], user_two["user_password"])
        assert popup_helper.user_message_is_displaying(some_other_message_data["title"])
I have gone back and forth a bit with the run_around_tests fixture but haven't been able to find a solution.
You have to adapt the fixture scope to cache its result for all tests in the test run (scope='session'), all tests in a module (scope='module'), all tests in a class (old unittest-style test classes only, scope='class'), or a single test (scope='function'; this is the default). Examples:
fixture function
@pytest.fixture(scope='session')
def token():
    return token_helper.get_api_access_token()

class Tests(object):
    def test_one(self, token):
        ...

    def test_two(self, token):
        ...

class OtherTests(object):
    def test_one(self, token):
        ...
The token will be calculated once when first requested and kept in cache throughout the whole test run, so all three tests Tests::test_one, Tests::test_two and OtherTests::test_one will share the same token value.
fixture class method
If you intend to write old-style test classes instead of test functions and want the fixture to be a class method (like it is in your code), note that you can only use the class scope, so that the fixture value is shared only between the tests in the class:
class TestStuff(object):
    @pytest.fixture(scope='class')
    def token(self):
        return token_helper.get_api_access_token()

    def test_one(self, token):
        ...

    def test_two(self, token):
        ...
Stuff aside:
pytest.yield_fixture is deprecated and replaced by pytest.fixture;
you don't need to set autouse=True because you explicitly request the fixture in the test parameters. It will be called anyway.
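Putting those two notes together, a trimmed-down version of the fixture from the question could look like this sketch:

import pytest
import helpers.api_access_token_helper as token_helper

class TestStuff(object):

    @pytest.fixture(scope='class')   # plain fixture; yield_fixture is deprecated
    def auth_token(self):            # no autouse: the tests request it explicitly
        return token_helper.get_api_access_token()

    def test_one(self, auth_token):
        ...

    def test_two(self, auth_token):
        ...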
You can add a scope="module" parameter to the @pytest.fixture decorator.
According to pytest documentation:
Fixtures requiring network access depend on connectivity and are usually time-expensive to create. Extending the previous example, we can add a scope="module" parameter to the @pytest.fixture invocation to cause a smtp_connection fixture function, responsible to create a connection to a preexisting SMTP server, to only be invoked once per test module (the default is to invoke once per test function). Multiple test functions in a test module will thus each receive the same smtp_connection fixture instance, thus saving time. Possible values for scope are: function, class, module, package or session.
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp_connection():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
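For illustration, a sketch of two tests in the same module that would then share that single connection (the assertions assume the server responds normally, as in the pytest docs example):

# content of test_module.py (sketch)
def test_ehlo(smtp_connection):
    response, _ = smtp_connection.ehlo()
    assert response == 250

def test_noop(smtp_connection):
    # Same smtp_connection object as in test_ehlo: with scope="module"
    # the fixture is only created once for this file.
    response, _ = smtp_connection.noop()
    assert response == 250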
Fixture scopes
Fixtures are created when first requested by a test, and are destroyed based on their scope:
function: the default scope, the fixture is destroyed at the end of the test.
class: the fixture is destroyed during teardown of the last test in the class.
module: the fixture is destroyed during teardown of the last test in the module.
package: the fixture is destroyed during teardown of the last test in the package.
session: the fixture is destroyed at the end of the test session.
Note: Pytest only caches one instance of a fixture at a time, which means that when using a parametrized fixture, pytest may invoke a fixture more than once in the given scope.
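In other words, a parametrized module-scoped fixture is set up once per parameter value, not once per module. A small sketch to illustrate:

import pytest

@pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
def server(request):
    # Even with scope="module", this setup runs once for each param,
    # so twice for this module in total.
    print("setting up", request.param)
    return request.param

def test_one(server):
    assert server

def test_two(server):
    assert server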
I have a Flask app and it has a before_first_request method defined. The method loads some cached data for the application. I am trying to run some unit tests, and the cached data is in the way. How can I mock the method?
@app.before_first_request
def load_caches():
    print("loading caches...")
    # cache loading here.
In my test file, I define a global test client as follows:
from unittest import TestCase
from .. import application
import mock
test_app = application.app.test_client()
My test classes follow that. The issue is that my test_app loads the cache and I need to mock that in my tests.
You can manually remove the registered hooks on the Flask app behind your test client:
test_app = application.app.test_client()
application.app.before_first_request_funcs = []
I'm surprised nobody gave a solution to this; I figure it may be useful to someone else at least. This may be a workaround, but I found it the easiest approach. In my case, I was testing a standalone function, not a method.
I was banging my head on your question too. I found that installing the Python undecorated library and importing it in the file that runs the unit tests did the trick. After that, call undecorated on the method inside the setUp method (before creating the test_client()). Something like this:
In test_my_module.py
import unittest

from my_app import app, my_module
from undecorated import undecorated

class MyTestClass(unittest.TestCase):
    def setUp(self):
        undecorated(my_module.my_function)
        # We are doing this before anything else due to the decorator's nature;
        # my_function has the @before_first_request decorator.
        # Other setUp code below
        self.client = app.test_client()
        # ...
I did not find a way to mock the function directly, but I can mock the functions called within it:
@app.before_first_request
def before_first_request():
    load_caches()

def load_caches():
    print("loading caches...")
I'm mocking out an API using unittest.mock. My interface is a class that uses requests behind the scenes. So I'm doing something like this:
@pytest.fixture
def mocked_api_and_requests():
    with mock.patch('my.thing.requests') as mock_requests:
        mock_requests.post.return_value = good_credentials
        api = MyApi(some='values', that='are defaults')
        yield api, mock_requests

def test_my_thing_one(mocked_api_and_requests):
    api, mocked_requests = mocked_api_and_requests
    ...  # some assertion or another

def test_my_thing_two(mocked_api_and_requests):
    api, mocked_requests = mocked_api_and_requests
    ...  # some other assertions that are different
As you can probably see, I've got the same first line in both of those tests and that smells like it's not quite DRY enough for me.
I'd love to be able to do something like:
def test_my_thing_one(mock_requests, logged_in_api):
    mock_requests.get.return_value = ...
rather than having to unpack those values - but I'm not sure if there's a way to reliably do that using pytest. If it's in the documentation for fixtures, I've totally missed it. It does feel like there should be a right way to do what I want here.
Any ideas? I'm open to using class TestGivenLoggedInApiAndMockRequests: ... if I need to go that route. I'm just not quite sure what the appropriate pattern is here.
It is possible to achieve exactly the result you want by using multiple fixtures.
Note: I modified your example minimally so that the code in my answer is self-contained, but you should be able to adapt it to your use case easily.
In myapi.py:
import requests

class MyApi:
    def get_uuid(self):
        return requests.get('http://httpbin.org/uuid').json()['uuid']
In test.py:
from unittest import mock

import pytest

from myapi import MyApi

FAKE_RESPONSE_PAYLOAD = {
    'uuid': '12e77ecf-8ce7-4076-84d2-508a51b1332a',
}

@pytest.fixture
def mocked_requests():
    with mock.patch('myapi.requests') as _mocked_requests:
        response_mock = mock.Mock()
        response_mock.json.return_value = FAKE_RESPONSE_PAYLOAD
        _mocked_requests.get.return_value = response_mock
        yield _mocked_requests

@pytest.fixture
def api():
    return MyApi()

def test_requests_was_called(mocked_requests, api):
    assert not mocked_requests.get.called
    api.get_uuid()
    assert mocked_requests.get.called

def test_uuid_is_returned(mocked_requests, api):
    uuid = api.get_uuid()
    assert uuid == FAKE_RESPONSE_PAYLOAD['uuid']

def test_actual_api_call(api):  # Notice we don't mock anything here!
    uuid = api.get_uuid()
    assert uuid != FAKE_RESPONSE_PAYLOAD['uuid']
Instead of defining one fixture that returns a tuple, I defined two fixtures, which can independently be used by the tests. An advantage of composing fixtures like that is that they can be used independently, e.g. the last test actually calls the API, simply by virtue of not using the mocked_requests fixture.
Note that -- to answer the question title directly -- you could also make mocked_requests a prerequisite of the api fixture by simply adding it to the parameters, like so:
@pytest.fixture
def api(mocked_requests):
    return MyApi()
You will see that it works if you run the test suite, because test_actual_api_call will no longer pass.
If you make this change, using the api fixture in a test will also mean executing it in the context of mocked_requests, even if you don't directly specify the latter in your test function's arguments. It's still possible to use it explicitly, e.g. if you want to make assertions on the returned mock.
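For example, with the dependent api fixture above, a test can rely on the mocking without naming mocked_requests at all (a small sketch):

def test_uuid_is_mocked_implicitly(api):
    # mocked_requests is set up behind the scenes because api now depends on it.
    assert api.get_uuid() == FAKE_RESPONSE_PAYLOAD['uuid']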
If you cannot easily split your tuple fixture into two independent fixtures, you can now "unpack" a tuple or list fixture into other fixtures using my pytest-cases plugin as explained in this answer.
Your code would look like:
from pytest_cases import pytest_fixture_plus

@pytest_fixture_plus(unpack_into="api,mocked_requests")
def mocked_api_and_requests():
    with mock.patch('my.thing.requests') as mock_requests:
        mock_requests.post.return_value = good_credentials
        api = MyApi(some='values', that='are defaults')
        yield api, mock_requests

def test_my_thing_one(api, mocked_requests):
    ...  # some assertion or another

def test_my_thing_two(api, mocked_requests):
    ...  # some other assertions that are different