Specify configuration settings per pytest test - python

I manage my configuration for a Python application with https://github.com/theskumar/python-dotenv and I use pytest for my tests.
For a particular set of tests, I want custom configuration specific to each test. Now I know of https://github.com/quiqua/pytest-dotenv, which gives me the ability to set a config per environment (prod/test), but I want finer granularity on a per-test basis. So far, I've handled this by mocking the Config object that contains all of my configuration. This is messy because for each test, I need to mock this Config object for each module where it's loaded.
Ideally, I'd have something like this:
def test_1(config):
    config.foo = 'bar'
    run_code_that_uses_config()

def test_2(config):
    config.foo = 'bleh'
    run_code_that_uses_config()

I ended up using monkeypatch and going with this approach:
import pytest

from my_library.settings import Config

@pytest.fixture
def config(monkeypatch):
    monkeypatch.setattr(Config, 'foo', 'bar')
    return Config

def test_1(config, monkeypatch):
    monkeypatch.setattr(config, 'foo', 'some value specific to this test')
    run_code_that_uses_config()
With monkeypatch I was able to pass around the Config object via a pytest fixture to create a default and override it for each test. The code that uses the config was able to pick up the change. Previously, I'd used patch from the standard mock library, but I had to patch every single place where the config was looked up (https://docs.python.org/3/library/unittest.mock.html#where-to-patch), which made maintenance difficult.
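The reason this avoids patching every module is that modules which do `from my_library.settings import Config` all share the same class object, so a class-attribute patch is visible everywhere. A minimal stdlib sketch of the same mechanics (hypothetical `Config` stand-in; `mock.patch.object` plays the role of `monkeypatch.setattr`, including the automatic undo):

```python
from unittest import mock

# Stand-in for my_library.settings.Config (names are assumptions).
class Config:
    foo = "bar"

def run_code_that_uses_config():
    # Pretend this lives in another module that did
    # `from my_library.settings import Config`: both modules share the
    # same class object, so a class-attribute patch is visible here too.
    return Config.foo

# monkeypatch.setattr(Config, 'foo', ...) behaves essentially like this:
with mock.patch.object(Config, "foo", "test-specific value"):
    patched = run_code_that_uses_config()
restored = run_code_that_uses_config()  # original value is back after the patch
print(patched, restored)
```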

I had a similar situation; however, I set it as a Flask config variable and used current_app to modify it in the test, e.g.:
import pytest
from flask import current_app

@pytest.mark.usefixtures(...)
def test_my_method(client, session):
    # Initially it was False, but I modify it to True here
    current_app.config['MY_CONFIG_VARIABLE'] = True
    ...
    # Then, do your test here
Hope it helps!

Related

In pytest, is there a way to mock an already imported variable without having to patch for every single test?

To be specific, we're using SQLAlchemy, and we define the session only once, say in utils/sessions.py, like the following:
# utils/sessions.py
from sqlalchemy.orm import scoped_session, sessionmaker

session = scoped_session(sessionmaker(
    ...
))
and we use this in our actual repository layer, for example:
# app/user/users.py
from utils.sessions import session

class UserRepo:
    def get_user(self, id):
        return session.query(User).filter(User.id == id)
Now I'm trying to perform some sort of unit/integration test, and I need to mock the session variable that came from utils.sessions practically everywhere within the tests (hence I would like to use an autouse fixture mocking utils.sessions.session).
So far, the only thing that seemed to work was to patch app.user.users.session within the test code itself, while what I want is to mock every single occurrence of session across different tests too. For example, if I want to test app/articles, I wouldn't want to repeatedly type with patch('apps.articles.article.session') unnecessarily.
Ideally, I would like to have something like this and expect every occurrence of session imported in the individual repos to work properly:
# tests/conftest.py
from unittest.mock import patch

import pytest
from sqlalchemy.orm import scoped_session

def mocked_session():
    return scoped_session("my_new_parameters_for_test_purpose_only")

@pytest.fixture(autouse=True)
def mock_all_sessions():
    with patch("utils.sessions.session", mocked_session) as s:
        yield s
but this didn't work for me. (Which is strange, because patching socket.socket this way successfully blocks networking attempts.)
It would be nice if there were a way to mock utils.sessions.session instead of having to manually patch apps/{every app I have}/{every .py file}.session.
Is there a way to do this? If not, is there an alternative to manually writing down the module path for every single test?
See the unittest documentation https://docs.python.org/3/library/unittest.mock.html#where-to-patch
The main issue is that when you patch, you are only patching the name of a variable in the context of a module. When users.py does from utils.sessions import session, it creates its own reference to session, i.e. there are two variable names which point to the same object.
utils.sessions.session = <your session object>
app.user.users.session = <your session object>
When you patch just utils.sessions.session you are only replacing the contents of the name in that module. e.g.
utils.sessions.session = <the patched mock object>
app.user.users.session = <your session object>
If you instead do the import as from utils import sessions then you would reference the object as sessions.session, which means when you patch the value on the sessions module, it will find the correct override to the patched value at runtime.
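This "two names, one object" behaviour can be demonstrated with the standard library alone. The sketch below fakes a `utils.sessions`-style module (the module name `sessions_demo` is made up for the demo), then shows that patching the module attribute does not affect a name copied out by a `from … import` style import:

```python
import sys
import types
from unittest import mock

# Build a tiny stand-in for utils/sessions.py with a module-level `session`.
sessions = types.ModuleType("sessions_demo")
sessions.session = "real session"
sys.modules["sessions_demo"] = sessions  # so mock.patch can resolve it by name

# Simulate `from utils.sessions import session` in users.py: the importing
# module binds its OWN name to the same object.
users_session = sessions.session

with mock.patch("sessions_demo.session", "patched session"):
    via_attribute = sessions.session  # attribute lookup sees the patch
    via_copied_name = users_session   # the copied name still sees the original

print(via_attribute, via_copied_name)
```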
IMO this becomes awkward because it imposes a certain style of performing imports on you, when both should be technically valid. An easier pattern might be to create a proxy method inside sessions.py that puts a barrier between importers of the module and access to the session object. That would look like this:
# utils/sessions.py
_session = scoped_session(sessionmaker(...))

def session():
    return _session
used like:
from utils.sessions import session

class UserRepo:
    def get_user(self, id):
        return session().query(User).filter(User.id == id)
Now when you go to patch, you only have to patch utils.sessions._session, and it doesn't matter how you do the import. The reference to the function session() remains unchanged, but the object it would return is changed for everyone.
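Why the proxy helps: callers copy the function reference, but `_session` is looked up inside `session()` at call time, so one patch reaches everyone. A runnable sketch under stand-in values (strings instead of a real scoped_session):

```python
from unittest import mock

# Stand-in for utils/sessions.py using the proxy pattern from the answer.
_session = "real session"

def session():
    return _session

# Stand-in for `from utils.sessions import session` in a repo module: the
# FUNCTION reference is copied, but _session is resolved at call time.
repo_session = session

# Patching the single private name in this module is enough:
with mock.patch(f"{__name__}._session", "patched session"):
    patched = repo_session()   # sees the patch, regardless of import style
restored = repo_session()

print(patched, restored)
```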
The problem is that you mock session only in one namespace, i.e. utils.sessions.__dict__, but the name "session" exists in many other namespaces e.g. apps.articles.article.__dict__.
There is a low-tech solution possible by simply changing import statements - instead of
from utils.sessions import session
You could use
from utils import sessions
Then you only need to mock in the one namespace (where the name is looked up) as you're already doing. The caveat is that all those submodules now need to use sessions.session instead of just session, so that they are all looking in the same namespace where you're applying the patch.
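A compact sketch of that low-tech fix, with a made-up module name standing in for `utils.sessions`: because callers keep a reference to the *module* and resolve `.session` at call time, one patch in that single namespace is visible to all of them.

```python
import sys
import types
from unittest import mock

# Stand-in for the `utils.sessions` module.
sessions = types.ModuleType("sessions_demo2")
sessions.session = "real session"
sys.modules["sessions_demo2"] = sessions

def get_user():
    # `from utils import sessions` style: the attribute is looked up in the
    # one shared module namespace every time this runs.
    return sessions.session

with mock.patch("sessions_demo2.session", "patched session"):
    patched = get_user()   # a single patch now covers every caller
restored = get_user()
print(patched, restored)
```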

How do I mock an external library's classes/functions such as yaml.load() or Box() in Python

How would I go about testing the following class and its functions?
import yaml
from box import Box
from yaml import SafeLoader

class Config:
    def set_config_path(self):
        self.path = r"./config/datasets.yaml"
        return self.path

    def create_config(self):
        with open(r"./config/datasets.yaml") as f:
            self.config = Box(yaml.load(f, Loader=SafeLoader))
        return self.config
These are the current tests I have created so far, but I am struggling with the final function:
import unittest
from unittest.mock import mock_open, patch

from src.utils.config import Config

class TestConfig(unittest.TestCase):
    def setUp(self):
        self.path = r"./config/datasets.yaml"

    def test_set_config_path(self):
        assert Config.set_config_path(self) == self.path

    @patch("builtins.open", new_callable=mock_open, read_data="data")
    def test_create_config(self, mock_file):
        assert open(self.path).read() == "data"
How would I go about testing/mocking the Box() and yaml.load() calls?
I have tried mocking where the Box and yaml.load() functions are used in the code; however, I don't fully understand how this works.
Ideally, I'd want to be able to pass a fake file to the with open() as f:, which is then read by Box and yaml.load to output a fake dictionary config.
Thanks!
The thing to remember about unit tests is that the goal is to test the public interfaces of YOUR code. Mocking a third party's code is not really a good thing to do; in Python it can be done, but it would take a lot of monkeypatching and other machinery.
Also, creating and deleting files in a unit test is perfectly fine. So you could just create a test version of the YAML file and store it in the unit-test directory. During a test, load the file and then make assertions to check that it was loaded and returned properly.
You wouldn't write a unit test checking whether Box initializes itself correctly; that belongs to the third party's own test suite, because it's not your code. Your tests only need to verify what you pass into it and get back from it.
So: create a test file, open it, load it as YAML, then pass the result into the Box constructor. Make assertions to verify those steps completed properly. There is no need to mock yaml or Box.
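A sketch of that suggestion using a temporary file (assumes PyYAML is installed; Box is left out since it only wraps the resulting dict in attribute access, and the file contents here are invented for the demo):

```python
import tempfile
from pathlib import Path

import yaml  # PyYAML, as used by the class under test

# Write a real test YAML file, load it, and assert on the result --
# no mocking of yaml (or Box) needed.
with tempfile.TemporaryDirectory() as tmp:
    config_path = Path(tmp) / "datasets.yaml"
    config_path.write_text("dataset:\n  name: iris\n  rows: 150\n")

    with open(config_path) as f:
        config = yaml.load(f, Loader=yaml.SafeLoader)

print(config)
```

In a real pytest suite, the `tmp_path` fixture would replace the `tempfile` boilerplate.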

pytest new test for every iteration | for loop | parametrize fixture

Basically, I'm trying to do a test for each iteration of a list of routes to check that the web pages associated with a particular function return a valid status code.
I want something along the lines of this:
import pytest
from flask import url_for
from myflaskapp import get_app

@pytest.yield_fixture
def app():
    # App context and specific overrides for test env
    yield get_app()

@pytest.yield_fixture
def client(app):
    yield app.test_client()

@pytest.yield_fixture
def routes(app):
    routes = [
        'foo',
        'bar',
        # There's quite a lot of function names here
    ]
    # convert the routes from func names to actual routes
    with app.app_context():
        for n, route in enumerate(routes):
            routes[n] = url_for(route)
            # yield url_for(route)  # NOTE: This would be ideal, but not allowed.
    yield routes

@pytest.mark.parametrize('route', routes)
def test_page_load(client, route):
    assert client.get(route).status_code == 200
I read that you can't mix parametrize with a fixture as an argument, due to something along the lines of collection/execution order; so how is this solved in terms of best practice?
I saw a solution where you can generate tests from a function directly, and that seems extremely flexible and might be along the lines of what I want: Passing pytest fixture in parametrize. (Although I can't call a fixture-decorated function directly, so probably not.)
Although, I'm new to pytest and I'd love to see more examples of how to generate tests or perform multiple tests in an iteration with little-to-no restrictions while adhering to proper pytest styling and the DRY principle. (I know about conftest.py)
I'd prioritize versatility/practicality over proper styling if that matters. (within reason, maintainability is a high priority too)
I want to be able to reference the solution to this problem to help guide how I structure my tests in the future, but I seem to keep hitting roadblocks/limitations or being told by pytest that I can't do X solution the way I would expect/want to.
Relevant Posts:
DRY: pytest: parameterize fixtures in a DRY way
Generate tests from a function: Passing pytest fixture in parametrize
Very simply solution (doesn't apply to this case): Parametrize pytest fixture
Flask app context in PyTest: Testing code that requires a Flask app or request context
Avoiding edge-cases with multiple list fixtures: Why does Pytest perform a nested loop over fixture parameters
Pytest fixtures themselves can be parameterized, though not with pytest.mark.parametrize. (It looks like this type of question was also answered here.) So:
import pytest
from flask import url_for
from myflaskapp import get_app

@pytest.fixture
def app():
    app = get_app()
    # app context stuff trimmed out here
    return app

@pytest.fixture
def client(app):
    client = app.test_client()
    return client

@pytest.fixture(params=[
    'foo',
    'bar',
])
def route(request, app):
    '''GET method urls that we want to perform mass testing on'''
    with app.app_context():
        return url_for(request.param)

def test_page_load(client, route):
    assert client.get(route).status_code == 200
The documentation explains it this way:
Fixture functions can be parametrized in which case they will be called multiple times, each time executing the set of dependent tests, i. e. the tests that depend on this fixture. Test functions usually do not need to be aware of their re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can be configured in multiple ways.
Extending the previous example, we can flag the fixture to create two smtp_connection fixture instances which will cause all tests using the fixture to run twice. The fixture function gets access to each parameter through the special request object:
# content of conftest.py
import smtplib

import pytest

@pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
def smtp_connection(request):
    smtp_connection = smtplib.SMTP(request.param, 587, timeout=5)
    yield smtp_connection
    print("finalizing {}".format(smtp_connection))
    smtp_connection.close()
The main change is the declaration of params with @pytest.fixture: a list of values, for each of which the fixture function will execute, accessing the value via request.param. No test function code needs to change.
My current solution that I fumbled my way across is this:
import pytest
from flask import url_for
from myflaskapp import get_app

@pytest.fixture
def app():
    app = get_app()
    # app context stuff trimmed out here
    return app

@pytest.fixture
def client(app):
    client = app.test_client()
    return client

def routes(app):
    '''GET method urls that we want to perform mass testing on'''
    routes = ['foo', 'bar']
    with app.app_context():
        for n, route in enumerate(routes):
            routes[n] = url_for(route)
    return routes

# NOTE: It'd be really nice if I could use routes as a
# fixture and pytest would handle this for me. I feel like I'm
# breaking the rules here doing it this way. (But I don't think I actually am)
@pytest.mark.parametrize('route', routes(get_app()))
def test_page_load(client, route):
    assert client.get(route).status_code == 200
My biggest issue with this solution is that I can't call the fixture directly as a function, and this solution requires either that or doing all the work my fixture does outside of the fixture, which is not ideal. I want to be able to reference this solution when structuring my tests in the future.
FOR ANYONE LOOKING TO COPY MY SOLUTION FOR FLASK SPECIFICALLY:
My current solution might be worse for some people than it is for me. I use a singleton structure for get_app(), so it's fine that get_app() is called many times in my case: it calls create_app() and stores the app in a global variable if that global isn't already defined, basically emulating the behavior of calling create_app() only once.

Is there a way to modify pytest config object from regular keyword?

I want to print, log, and produce a report in my pytest framework.
I am creating a config object in pytest_configure as follows
conftest.py
def pytest_configure(config):
    config.logs = []
Then I am creating a fixture to modify this object
@pytest.fixture(scope="function", autouse=True)
def print_info(request):
    return request.config.logs
In the test file I am calling this fixture to modify object
test_logs.py
def test_dummy(print_info):
    print_info.append("I am in test_dummy")
I want to modify the config object without passing that object in test case.
For example, I want to do the following:
from conftest import print_log

def test_dummy():
    print_log("I am in test_dummy")
and in conftest.py we can define that function to modify the config object
conftest.py
def print_log(message):
    # function to modify the config object
    ...
Wouldn't the logging module help here?
You can configure the log record to contain exactly the information you want, e.g. the module and test function in which the message was logged.
Logging has great features like filtering by log level or message content, writing different records to separate log files with various text formats, etc.
I would encourage you to take a look at:
https://docs.pytest.org/en/latest/logging.html
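A stdlib sketch of the per-function context the answer describes (pytest's caplog fixture and log-cli options build on exactly these standard logging records; the logger name and format are made up for the demo):

```python
import io
import logging

# Capture records in memory and format them with the originating function name.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(funcName)s: %(message)s"))

logger = logging.getLogger("suite")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def test_dummy():
    # No fixture or config object needs to be passed in: any module can
    # call logging.getLogger("suite") and log with full context.
    logger.info("I am in test_dummy")

test_dummy()
record = stream.getvalue().strip()
print(record)  # INFO test_dummy: I am in test_dummy
```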

Is it possible to use a fixture in another fixture and both in a test?

I'm mocking out an API using unittest.mock. My interface is a class that uses requests behind the scenes. So I'm doing something like this:
@pytest.fixture
def mocked_api_and_requests():
    with mock.patch('my.thing.requests') as mock_requests:
        mock_requests.post.return_value = good_credentials
        api = MyApi(some='values', that='are defaults')
        yield api, mock_requests

def test_my_thing_one(mocked_api_and_requests):
    api, mocked_requests = mocked_api_and_requests
    ...  # some assertion or another

def test_my_thing_two(mocked_api_and_requests):
    api, mocked_requests = mocked_api_and_requests
    ...  # some other assertions that are different
As you can probably see, I've got the same first line in both of those tests and that smells like it's not quite DRY enough for me.
I'd love to be able to do something like:
def test_my_thing_one(mock_requests, logged_in_api):
mock_requests.get.return_value = ...
Rather than have to unpack those values, but I'm not sure if there's a way to reliably do that using pytest. If it's in the documentation for fixtures I've totally missed it. But it does feel like there should be a right way to do what I want to do here.
Any ideas? I'm open to using class TestGivenLoggedInApiAndMockRequests: ... if I need to go that route. I'm just not quite sure what the appropriate pattern is here.
It is possible to achieve exactly the result you want by using multiple fixtures.
Note: I modified your example minimally so that the code in my answer is self-contained, but you should be able to adapt it to your use case easily.
In myapi.py:
import requests

class MyApi:
    def get_uuid(self):
        return requests.get('http://httpbin.org/uuid').json()['uuid']
In test.py:
from unittest import mock

import pytest

from myapi import MyApi

FAKE_RESPONSE_PAYLOAD = {
    'uuid': '12e77ecf-8ce7-4076-84d2-508a51b1332a',
}

@pytest.fixture
def mocked_requests():
    with mock.patch('myapi.requests') as _mocked_requests:
        response_mock = mock.Mock()
        response_mock.json.return_value = FAKE_RESPONSE_PAYLOAD
        _mocked_requests.get.return_value = response_mock
        yield _mocked_requests

@pytest.fixture
def api():
    return MyApi()

def test_requests_was_called(mocked_requests, api):
    assert not mocked_requests.get.called
    api.get_uuid()
    assert mocked_requests.get.called

def test_uuid_is_returned(mocked_requests, api):
    uuid = api.get_uuid()
    assert uuid == FAKE_RESPONSE_PAYLOAD['uuid']

def test_actual_api_call(api):  # Notice we don't mock anything here!
    uuid = api.get_uuid()
    assert uuid != FAKE_RESPONSE_PAYLOAD['uuid']
Instead of defining one fixture that returns a tuple, I defined two fixtures that the tests can use independently. An advantage of composing fixtures like this is that each test opts in to only what it needs; e.g. the last test actually calls the API, simply by virtue of not using the mocked_requests fixture.
Note that -- to answer the question title directly -- you could also make mocked_requests a prerequisite of the api fixture by simply adding it to the parameters, like so:
@pytest.fixture
def api(mocked_requests):
    return MyApi()
You will see that it works if you run the test suite, because test_actual_api_call will no longer pass.
If you make this change, using the api fixture in a test will also mean executing it in the context of mocked_requests, even if you don't directly specify the latter in your test function's arguments. It's still possible to use it explicitly, e.g. if you want to make assertions on the returned mock.
If you cannot easily split your tuple fixture into two independent fixtures, you can now "unpack" a tuple or list fixture into other fixtures using my pytest-cases plugin, as explained in this answer.
Your code would look like:
from pytest_cases import pytest_fixture_plus

@pytest_fixture_plus(unpack_into="api,mocked_requests")
def mocked_api_and_requests():
    with mock.patch('my.thing.requests') as mock_requests:
        mock_requests.post.return_value = good_credentials
        api = MyApi(some='values', that='are defaults')
        yield api, mock_requests

def test_my_thing_one(api, mocked_requests):
    ...  # some assertion or another

def test_my_thing_two(api, mocked_requests):
    ...  # some other assertions that are different
