I recently switched from Django's TestCase classes to the third-party pytest system. This sped up my test suite significantly (by a factor of 5) and has been a great experience overall.
I do have issues with Selenium, though. I've made a simple fixture to include the browser in my tests:
import pytest
from selenium import webdriver

@pytest.yield_fixture
def browser(live_server, transactional_db, admin_user):
    driver_ = webdriver.Firefox()
    driver_.server_url = live_server.url
    driver_.implicitly_wait(3)
    yield driver_
    driver_.quit()
But for some reason, the database is not properly reset between tests. I have a test similar to:
class TestCase:
    def test_some_unittest(self, db):
        # Create some items
        # ...

    def test_with_selenium(self, browser):
        # The items from the above test exist in this test
The objects created in test_some_unittest are present in test_with_selenium. I'm not really sure how to solve this.
Switching from django.test.TestCase to pytest means employing the pytest-django plugin, and your tests should look like this:
class TestSomething(object):
    def setup_method(self, method):
        pass

    @pytest.mark.django_db
    def test_something_with_dg(self):
        assert True
That above all means no django.test.TestCase inheritance (that class is a derivation of the Python std unittest framework).
The @pytest.mark.django_db marker means your test case will run in a transaction, which will be rolled back once the test case is over.
The first occurrence of the django_db marker will also trigger the Django migrations.
Beware of using database calls in special pytest methods such as setup_method; it's unsupported and otherwise problematic (a fixture-based alternative is sketched below):
django-pytest setup_method database issue
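If you need objects in the database before a test runs, the supported route is a fixture that itself requests db. A minimal sketch, where myapp.models.Item is a hypothetical model:
import pytest

@pytest.fixture
def two_items(db):
    # Database setup belongs in a fixture that requests `db`,
    # not in setup_method, which pytest-django does not manage.
    from myapp.models import Item  # hypothetical app and model
    return [Item.objects.create(name="first"),
            Item.objects.create(name="second")]

def test_items_were_created(two_items):
    assert len(two_items) == 2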
def _django_db_fixture_helper(transactional, request, _django_cursor_wrapper):
    if is_django_unittest(request):
        return

    if transactional:
        _django_cursor_wrapper.enable()

        def flushdb():
            """Flush the database and close database connections"""
            # Django does this by default *before* each test
            # instead of after.
            from django.db import connections
            from django.core.management import call_command

            for db in connections:
                call_command('flush', verbosity=0,
                             interactive=False, database=db)
            for conn in connections.all():
                conn.close()

        request.addfinalizer(_django_cursor_wrapper.disable)
        request.addfinalizer(flushdb)
    else:
        if 'live_server' in request.funcargnames:
            return
        from django.test import TestCase

        _django_cursor_wrapper.enable()
        _django_cursor_wrapper._is_transactional = False
        case = TestCase(methodName='__init__')
        case._pre_setup()
        request.addfinalizer(_django_cursor_wrapper.disable)
        request.addfinalizer(case._post_teardown)
As I see, you use pytest-django (which is fine).
From this code of it, it doesn't flush the db if it's a non-transactional db.
So in your 'other' tests you'd have to use transactional_db, and then it will be isolated as you wanted.
Your code will then look like:
class TestCase:
    def test_some_unittest(self, transactional_db):
        # Create some items
        # ...

    def test_with_selenium(self, browser):
        # The items from the above test do not exist in this test
However, an improvement to pytest-django could be that the flush is performed before, not after, the yield of the fixture value, which would make much more sense: it's not so important what's in teardown, it's important that setup is correct.
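Until then, you can approximate that behaviour with your own fixture that flushes before the test body runs. A rough sketch building on pytest-django's transactional_db fixture:
import pytest
from django.core.management import call_command
from django.db import connections

@pytest.fixture
def flushed_transactional_db(transactional_db):
    # Flush every configured database *before* the test body runs,
    # so leftovers from earlier tests cannot leak in.
    for db in connections:
        call_command('flush', verbosity=0, interactive=False, database=db)
    yield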
As a side suggestion: for the browser fixture you can just use the pytest-splinter plugin.
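With pytest-splinter installed you get a ready-made browser fixture. A minimal sketch of a live-server test using it (the /admin/ URL assumes the default Django admin is mounted):
def test_admin_login_page(browser, live_server, transactional_db):
    # `browser` is provided by pytest-splinter; no manual quit() needed
    browser.visit(live_server.url + '/admin/')
    assert browser.is_text_present('Django administration')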
I'm using the dependency-injector Python library and I'm having issues testing my use cases.
I have these containers:
class TestAdapters(containers.DeclarativeContainer):
    repository: RepositoryInterface = providers.Singleton(
        MockRepository
    ).add_args(1)
    ...  # More adapters here

class Adapters(containers.DeclarativeContainer):
    repository: RepositoryInterface = providers.Singleton(
        RedisRepository
    ).add_args(1)
    ...

class UseCases(containers.DeclarativeContainer):
    adapters = providers.DependenciesContainer()
    example_use_case: ExampleUseCaseInterface = providers.Factory(
        ExampleUseCase, repository=adapters.respository
    )
    ...

def get_adapter():
    import os
    if os.getenv("ENVIRONMENT") == "TEST":
        return TestAdapters()
    return Adapters()
In my API, I set the dependencies like this:
# FastAPI main.py file
def get_application() -> FastAPI:
    container = UseCases(adapters=get_adapter())
    container.wire(modules=["adapters.api.v1.controllers"])
    application = FastAPI(
        title=settings.PROJECT_NAME,
    )
    application.include_router(router)
    application.middleware("http")(catch_exceptions_middleware)
    return application
But I'm trying to test my use cases in isolation, without necessarily going through the API. So my strategy is to create a use_cases fixture, which returns a UseCases container with the right Adapters container set:
# In the global conftest.py
@pytest.fixture
def use_cases():
    container = UseCases(adapters=TestAdapters())
    container.wire(modules=["adapters.api.v1.controllers"])
    yield container
    container.unwire()
But when I run any test with the use_cases fixture I get this error:
dependency_injector.errors.Error: Dependency "UseCases.adapters.respository" is not defined
What am I missing here? Is there a better way to do it?
There was nothing wrong with my setup. My problem was a typo in the example_use_case Factory of the UseCases container: repository=adapters.respository should read repository=adapters.repository.
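As an aside, dependency-injector can also swap a single provider per test via override(), which avoids wiring a second container when only one adapter needs replacing. A sketch, where FakeRepository is a hypothetical stand-in:
from dependency_injector import providers

def test_example_use_case_with_fake_repository():
    adapters = TestAdapters()
    container = UseCases(adapters=adapters)
    # Temporarily replace the repository provider for this test only
    with adapters.repository.override(providers.Object(FakeRepository())):
        use_case = container.example_use_case()
        # ... exercise the use case against the fake repository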
I am trying to use pytest-xdist for running parallel tests.
It works and I see improved performance.
To improve it further, I wanted to provide it multiple databases by using django_db_modify_db_settings_xdist_suffix.
I overrode that function in my conftest.py.
Since I have four workers, I created four databases manually. Then I use conftest.py to modify settings.DATABASES so that each worker tests against DBNAME_<suffix> instead of DBNAME.
I verified that my settings.DATABASES has changed and is correct.
But the queries are still going to the old db DBNAME (which is no longer in my settings.DATABASES).
What could be going wrong?
Do I need to make any other change, or is the change in the conftest.py fixture enough?
Thanks in advance for any help or direction.
EDIT:
My conftest.py has a lot of stuff.
At the end of django_db_modify_db_settings_xdist_suffix, if I log settings.DATABASES it shows me the correct and expected info, but the queries still go to a different db.
conftest.py is the same in both runs (pytest -n 4 or pytest). Since it depends on the xdist suffix, it modifies the settings.DATABASES value in the "-n 4" run only.
Two relevant functions which I think are important here:
#pytest.fixture(scope="session")
def django_db_setup(
request,
django_test_environment,
django_db_blocker,
django_db_use_migrations,
django_db_keepdb,
django_db_createdb,
django_db_modify_db_settings,
):
pass
And
#pytest.fixture(scope="session")
def django_db_modify_db_settings_xdist_suffix(request):
from django.conf import settings
default = settings.DATABASES["default"].copy()
settings.DATABASES.clear()
settings.DATABASES["default"] = default
xdist_suffix = None
xdist_worker_id = get_xdist_worker_id(request)
if xdist_worker_id != 'master':
xdist_suffix = xdist_worker_id
if xdist_suffix:
for db_settings in settings.DATABASES.values():
test_name = db_settings.get("TEST", {}).get("NAME")
if not test_name:
test_name = "test_{}".format(db_settings["NAME"])
db_settings.setdefault("TEST", {})
db_settings["TEST"]["NAME"] = "{}_{}".format(test_name, xdist_suffix)
db_settings["NAME"] = db_settings["TEST"]["NAME"]
I am doing something similar here to what you have done, though it may be simpler or more elegant. To my mind, the issue sits more on the 'Django' side than on pytest-xdist.
I have used pytest-xdist to scale concurrent stress testing, and the hook that seems most relevant to your question uses the gateway id to send a setting to the remote worker, which allows distinguishing between nodes:
def pytest_configure_node(self, node: WorkerController) -> None:
    """Set something peculiar for each node."""
    node.workerinput['SPECIAL'] = get_db_for_node(node.gateway.id)
You would implement get_db_for_node(gateway_id: str) -> str yourself, mapping each gateway id to one of your databases; a sketch follows.
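For instance, a minimal sketch of such a mapping, assuming the four manually created databases are named DBNAME_0 through DBNAME_3 (hypothetical names):
def get_db_for_node(gateway_id: str) -> str:
    # xdist gateway ids look like 'gw0', 'gw1', ...
    databases = ['DBNAME_0', 'DBNAME_1', 'DBNAME_2', 'DBNAME_3']
    worker_index = int(gateway_id.lstrip('gw'))
    return databases[worker_index % len(databases)]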
Then in the worker you could leverage config.workerinput to access the special value mentioned above:
#pytest.fixture(scope="session")
def special(pytestconfig: Config) -> str:
if not hasattr(pytestconfig, 'workerinput'):
log.exception('fixture requires xdist')
return ''
return pytestconfig.workerinput['SPECIAL']
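A session-scoped autouse fixture could then point Django at the chosen database. A sketch, assuming the special value is a database name; note that ordering relative to pytest-django's own session fixtures may need care:
import pytest

@pytest.fixture(scope='session', autouse=True)
def apply_worker_db(special):
    from django.conf import settings
    if special:
        # Point the default alias at this worker's database
        settings.DATABASES['default']['NAME'] = special
        settings.DATABASES['default'].setdefault('TEST', {})['NAME'] = special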
As per the pytest documentation, it is possible to override the default temporary directory setting as follows:
py.test --basetemp=/base_dir
When the tmpdir fixture is then used in a test ...
def test_new_base_dir(tmpdir):
    print(str(tmpdir))
    assert False
... something like the following would then be printed to the screen:
/base_dir/test_new_base_dir_0
This works as intended and for certain use cases can be very useful.
However, I would like to be able to change this setting on a per-test (or perhaps I should say a "per-fixture") basis. Is such a thing possible?
I'm close to just rolling my own tmpdir based on the code for the original, but would rather not do this -- I want to build on top of existing functionality where I can, not duplicate it.
As an aside, my particular use case is that I am writing a Python module that will act on different kinds of file systems (NFS4, etc.), and it would be nice to reuse the functionality of tmpdir to create fixtures like these:
def test_nfs3_stuff(nfs3_tmpdir):
    ...  # test NFS3 functionality

def test_nfs4_stuff(nfs4_tmpdir):
    ...  # test NFS4 functionality
In the sources of TempdirFactory, the .config.option.basetemp attribute is used to store the basetemp, so you can set it directly before use:
import pytest
import time
import os

def mktemp_db(tmpdir_factory, db):
    basetemp = None
    if 'PYTEST_TMPDIR' in os.environ:
        basetemp = os.environ['PYTEST_TMPDIR']
    if basetemp:
        tmpdir_factory.config.option.basetemp = basetemp
    if db == "db1.db":
        tmpdb = tmpdir_factory.mktemp('data1_').join(db)
    elif db == "db2.db":
        tmpdb = tmpdir_factory.mktemp('data2_').join(db)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db1.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db2(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db2.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

def test_empty_db(empty_db):
    pass

def test_empty_db2(empty_db2):
    pass
Running with the PYTEST_TMPDIR environment variable set:
>set PYTEST_TMPDIR=./tmp
>python.exe -m pytest -q -s test_my_db.py
* c:\tests\tmp\data1_0\db1.db
.* c:\tests\tmp\data2_0\db2.db
.
2 passed in 10.03 seconds
There didn't appear to be a nice solution to the problem as posed in the question, so I settled on making two calls to py.test:
Passing a different --basetemp to each call.
Marking (using @pytest.mark.my_mark) which tests need the special treatment of a non-standard basetemp.
Passing -k my_mark or -k "not my_mark" into each call, as sketched below.
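A sketch of that arrangement (my_mark and the paths are illustrative):
import pytest

@pytest.mark.my_mark
def test_on_special_fs(tmpdir):
    ...  # runs with the non-standard basetemp

def test_ordinary(tmpdir):
    ...  # runs with the default basetemp
Invoked as two separate runs:
py.test --basetemp=/special_base_dir -k my_mark
py.test -k "not my_mark"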
When running a pytest unit test against a CherryPy server, using a cherrypy.helper.CPWebCase subclass, how can I set data for the session object? I tried just calling cherrypy.session['foo'] = 'bar' like I would if I were really in a CherryPy call, but that just gave an "AttributeError: '_Serving' object has no attribute 'session'".
For reference, a test case might look something like this (pulled from the CherryPy docs with minor edits):
import cherrypy
from cherrypy.test import helper
from MyApp import Root

class SimpleCPTest(helper.CPWebCase):
    def setup_server():
        cherrypy.tree.mount(Root(), "/", {'/': {'tools.sessions.on': True}})
    setup_server = staticmethod(setup_server)

    def check_two_plus_two_equals_four(self):
        # <code to set session variable to 2 here>
        # This is the question: How do I set a session variable?
        self.getPage("/")
        self.assertStatus('200 OK')
        self.assertHeader('Content-Type', 'text/html;charset=utf-8')
        self.assertBody('4')
And the handler might look something like this (or anything else, it makes no difference whatsoever):
class Root:
    @cherrypy.expose
    def test_handler(self):
        # Get a random session variable and do something with it
        number_var = cherrypy.session.get('Number')
        # Add two. This will fail if the session variable has not been set,
        # or is not a number.
        number_var = number_var + 2
        return str(number_var)
It's safe to assume that the config is correct, and sessions work as expected.
I could, of course, write a CherryPy page that takes a key and value as arguments, and then sets the specified session value, and call that from my test code (EDIT: I've tested this, and it does work). That, however, seems kludgy, and I'd really want to limit it to testing only somehow if I went down that road.
What you are trying to achieve is usually referred to as mocking.
While running tests you usually want to 'mock' some of the resources you access with dummy objects having the same interface (duck typing). This may be achieved with monkey patching. To simplify this process you may use unittest.mock.patch as either a context manager or a method/function decorator.
Please find below a working example using the context manager option:
==> MyApp.py <==
import cherrypy

class Root:
    _cp_config = {'tools.sessions.on': True}

    @cherrypy.expose
    def test_handler(self):
        # Get a random session variable and do something with it
        number_var = cherrypy.session.get('Number')
        # Add two. This will fail if the session variable has not been set,
        # or is not a number.
        number_var = number_var + 2
        return str(number_var)
==> cp_test.py <==
from unittest.mock import patch

import cherrypy
from cherrypy.test import helper
from cherrypy.lib.sessions import RamSession

from MyApp import Root

class SimpleCPTest(helper.CPWebCase):
    @staticmethod
    def setup_server():
        cherrypy.tree.mount(Root(), '/', {})

    def test_check_two_plus_two_equals_four(self):
        # <code to set session variable to 2 here>
        sess_mock = RamSession()
        sess_mock['Number'] = 2
        with patch('cherrypy.session', sess_mock, create=True):
            # Inside this block all manipulations of `cherrypy.session`
            # actually access the `sess_mock` instance instead
            self.getPage("/test_handler")
            self.assertStatus('200 OK')
            self.assertHeader('Content-Type', 'text/html;charset=utf-8')
            self.assertBody('4')
Now you may safely run the test as follows:
$ py.test -sv cp_test.py
============================================================================================================ test session starts =============================================================================================================
platform darwin -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- ~/.pyenv/versions/3.5.2/envs/cptest-pyenv-virtualenv/bin/python3.5
cachedir: .cache
rootdir: ~/src/cptest, inifile:
collected 2 items
cp_test.py::SimpleCPTest::test_check_two_plus_two_equals_four PASSED
cp_test.py::SimpleCPTest::test_gc <- ../../.pyenv/versions/3.5.2/envs/cptest-pyenv-virtualenv/lib/python3.5/site-packages/cherrypy/test/helper.py PASSED
I'm converting a code base from Ruby to Python. In Ruby/RSpec I wrote custom "matchers" which allow me to black-box test web services like this:
describe 'webapp.com' do
  it 'is configured for ssl' do
    expect('www.webapp.com').to have_a_valid_cert
  end
end
I'd like to write code to extend a Python testing framework with the same functionality. I realize that it will probably not look the same, of course. It doesn't need to be BDD. "Assert..." is just fine. Is pytest a good candidate for extending? Are there any examples of writing extensions like these?
Yes, pytest is a good framework for what you need. We are using pytest with requests and PyHamcrest. Look at this example:
import pytest
import requests
from hamcrest import *

class SiteImpl:
    def __init__(self, url):
        self.url = url

    def has_valid_cert(self):
        return requests.get(self.url, verify=True)

@pytest.yield_fixture
def site(request):
    # setUp
    yield SiteImpl('https://' + request.param)
    # tearDown

def has_status(item):
    return has_property('status_code', item)

@pytest.mark.parametrize('site', ['google.com', 'github.com'], indirect=True)
def test_cert(site):
    assert_that(site.has_valid_cert(), has_status(200))

if __name__ == '__main__':
    pytest.main(args=[__file__, '-v'])
The code above uses parametrization for the site fixture. The yield_fixture gives you the possibility to set up and tear down around the yield, and the inline matcher has_status keeps the test assertion easy to read.
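And if you want something even closer to the RSpec matcher in the question, PyHamcrest lets you define a standalone matcher by subclassing BaseMatcher. A minimal sketch:
import requests
from hamcrest import assert_that
from hamcrest.core.base_matcher import BaseMatcher

class HasAValidCert(BaseMatcher):
    def _matches(self, hostname):
        # True if the HTTPS certificate verifies, False otherwise
        try:
            requests.get('https://' + hostname, verify=True)
            return True
        except requests.exceptions.SSLError:
            return False

    def describe_to(self, description):
        description.append_text('a host serving a valid TLS certificate')

def have_a_valid_cert():
    return HasAValidCert()

def test_webapp_cert():
    assert_that('www.webapp.com', have_a_valid_cert())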