I'm building an application that uses a database in the backend. For integration tests, I start the database in Docker and run a test suite with pytest.
I use a session-scoped fixture with autouse=True to start the Docker container:
@pytest.fixture(scope='session', autouse=True)
def run_database():
    # setup code skipped ...
    # start container with docker-py
    container.start()
    # yield container to run tests
    yield container
    # stop container afterwards
    container.stop()
I pass the database connection to the test functions with another session-scoped fixture:
@pytest.fixture(scope='session')
def connection():
    return Connection(...)
Now I can run a test function:
def test_something(connection):
    result = connection.run(...)
    assert result == 'abc'
However, I would like to run my test functions against multiple different versions of the database.
I could run multiple Docker containers in the run_database() fixture. How can I parametrize my test functions so that they run for two different connection() fixtures?
The answer by @Guy works!
I found another solution for the problem. It is possible to parametrize a fixture. Every test function that uses the fixture will run multiple times: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures
Thus, I parametrized the connection() function:
@pytest.fixture(scope='session', params=['url_1', 'url_2'])
def connection(request):
    yield Connection(url=request.param)
Now every test function that uses the connection fixture runs twice. The advantage is that you do not have to change/adapt/mark the existing test functions.
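If each database version also needs its own container, the run_database() fixture can be parametrized in the same way and the connection fixture can depend on it. A rough sketch (the image tags and the docker-py client setup are assumptions, not part of my original code):

import docker
import pytest

@pytest.fixture(scope='session', params=['postgres:14', 'postgres:15'], autouse=True)
def run_database(request):
    client = docker.from_env()
    container = client.containers.run(request.param, detach=True)
    yield container
    container.stop()

@pytest.fixture(scope='session')
def connection(run_database):
    # build the connection URL from the running container's address/port
    return Connection(...)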
You can write a function that yields connections and pass it with @pytest.mark.parametrize. If you change the scope of run_database() to class, it will run once for every test class:
def data_provider():
    connections = [Connection(1), Connection(2), Connection(3)]
    for connection in connections:
        yield connection
@pytest.fixture(scope='class', autouse=True)
def run_database():
    container.start()
    yield container
    container.stop()
@pytest.mark.parametrize('connection', data_provider())
@pytest.mark.testing
def test_something(connection):
    result = connection.run()
    assert result == 'abc'
If you add @pytest.mark.parametrize('connection', data_provider()) to run_database() as well, the connection will be passed there too.
I'm using Python 3.9.7, databases 0.4.3, asyncpg 0.24.0.
Relevant files/snippets:
tests.py
from unittest import TextTestRunner, TestSuite, TestLoader
runner = TextTestRunner()
test_loader = TestLoader()
suite = TestSuite()
test_suite = add_tests_to_suite(suite) # <- All my tests are added here
runner.run(test_suite)
db.py
from databases import Database
dal = Database() # This is the "Databases" lib instance
some_test.py
from unittest import IsolatedAsyncioTestCase
from db import dal
class SomeTest(IsolatedAsyncioTestCase):
    async def some_async_test(self):
        try:
            await dal.connect()
            # Test logic happens here
        finally:
            await dal.disconnect()
The code above works; however, connecting and disconnecting on every unit test takes around 400ms, which is very slow when dealing with a large number of unit tests. What is the proper/recommended way of dealing with async database connections in the context of unit tests?
Things I tried:
Move dal.connect() to tests.py, but that file is not in the asyncio context, therefore I cannot await the connect() function.
Create an asyncio loop in tests.py just so I can await the connect() function, but this approach throws:
RuntimeWarning: coroutine 'IsolatedAsyncioTestCase._asyncioLoopRunner' was never awaited
Run the function dal.connect() only once, rather than on every test, but it throws:
asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress
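For completeness, a pattern I have not tried yet (a sketch that assumes switching the runner to pytest with pytest-asyncio instead of the unittest runner above) would be a session-scoped fixture that owns the connection for the whole run:

# conftest.py -- hypothetical, assumes pytest + pytest-asyncio
import asyncio

import pytest
import pytest_asyncio

from db import dal

@pytest.fixture(scope='session')
def event_loop():
    # keep a single loop for the whole session so the shared connection stays on it
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()

@pytest_asyncio.fixture(scope='session')
async def database():
    # connect once per test run instead of once per test
    await dal.connect()
    yield dal
    await dal.disconnect()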
I'm developing a testing suite for a Flask app that uses Celery for processing background tasks.
I am working on integration tests and have been trying to configure an embedded live worker as per the documentation (https://docs.celeryproject.org/en/latest/userguide/testing.html).
conftest.py
@pytest.fixture(scope='session')
def celery_config():
    return {
        'broker_url': 'memory://localhost/',
        'result_backend': 'memory://localhost/',
    }
@pytest.fixture(scope='module')
def create_flask_app():
    # drop all records in testDatabase before starting a new test module
    db = connect(host=os.environ["MONGODB_SETTINGS_TEST"], alias="testConnect")
    for collection in db["testDatabase"].list_collection_names():
        db["testDatabase"].drop_collection(collection)
    db.close()
    # Create a test client using the Flask application configured for testing
    flask_app = create_app()
    return flask_app
@pytest.fixture(scope='function')
def test_client(create_flask_app):
    """
    Establish a test client for use within each test module
    """
    with create_flask_app.test_client() as testing_client:
        with create_flask_app.app_context():
            yield testing_client
@pytest.fixture(scope='function')
def celery_app(create_flask_app):
    from celery.contrib.testing import tasks
    from app import celery
    return celery
I'm trying to run the tests using local memory as the backend, yet the tasks hang and the test suite never finishes executing.
When I run the tests with a Redis backend (and start Redis on my development machine) everything works fine, but I'd like not to depend on Redis when running the tests.
Am I doing something wrong with the setup? Does anyone have any idea on why the tasks are hanging?
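One detail that may be worth checking (an assumption on my part; the hang could have other causes): 'memory://' is a broker transport rather than a result backend, and Celery's own test app uses the in-memory cache backend for results. The config fixture would then look roughly like this:

@pytest.fixture(scope='session')
def celery_config():
    return {
        'broker_url': 'memory://',
        'result_backend': 'cache+memory://',
    }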
I have the following test class for pytest:
class TestConnection(AsyncTestCase):
    '''Integration test'''

    @gen_test
    def test_connecting_to_server(self):
        '''Connecting to the TCPserver'''
        client = server = None
        try:
            sock, port = bind_unused_port()
            with NullContext():
                server = EchoServer()
                server.add_socket(sock)
            client = IOStream(socket.socket())
            #### HERE I WANT TO HAVE THE caplog FIXTURE
            with ExpectLog(app_log, '.*decode.*'):
                yield client.connect(('localhost', port))
                yield client.write(b'hello\n')
                # yield client.read_until(b'\n')
                yield gen.moment
            assert False
        finally:
            if server is not None:
                server.stop()
            if client is not None:
                client.close()
Within this class, ExpectLog apparently does not work, so after a day of digging around in pytest's documentation I found the caplog fixture, which you can have injected into your methods in order to access the captured logs. It works if I have a plain test function to which I add the caplog argument, but how do I make the caplog fixture available within the methods of a test class like the one above?
Although you can't pass fixtures as parameters to unittest test methods, you can inject them as instance attributes. Example:
# spam.py
import logging

def eggs():
    logging.getLogger().info('bacon')
Test for spam.eggs():
# test_spam.py
import logging
import unittest

import pytest

import spam

class SpamTest(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def inject_fixtures(self, caplog):
        self._caplog = caplog

    def test_eggs(self):
        with self._caplog.at_level(logging.INFO):
            spam.eggs()
            assert self._caplog.records[0].message == 'bacon'
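Applied to the Tornado AsyncTestCase from the question, the same trick would look roughly like this (a sketch; the body of the test stays as in the original, and replacing ExpectLog with assertions on the captured records is left to taste):

import logging

import pytest
from tornado.log import app_log
from tornado.testing import AsyncTestCase, gen_test

class TestConnection(AsyncTestCase):

    @pytest.fixture(autouse=True)
    def inject_fixtures(self, caplog):
        self._caplog = caplog

    @gen_test
    def test_connecting_to_server(self):
        with self._caplog.at_level(logging.WARNING, logger=app_log.name):
            # ... set up the server and client and yield the coroutines as before ...
            pass
        # inspect self._caplog.records here instead of (or in addition to) ExpectLog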
I am working on a Flask application that interacts with Redis. The application is deployed on Heroku, with a Redis add-on.
When I test the interaction, I am not able to get the key-value pair that I just set. Instead, I always get None as the return value. Here is the example:
import os

from flask import Flask
import redis

app = Flask(__name__)
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
redis = redis.from_url(redis_url)

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    print redis.get("test")  # prints None here
    return "what the freak"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
As shown above, the test route prints None, which means the value is not set. I am confused: when I test the server from my local browser it works, and when I interact with Redis using the Heroku Python shell it works too.
testing with python shell:
heroku run python
from server import redis
redis.set('test', 'i am here') # return True
redis.get('test') # return i am here
I am confused now. How should I properly interact with redis using Flask?
Redis-py constructs a ConnectionPool-backed client by default, and this is probably what the from_url helper function is doing. While Redis itself is single-threaded, the commands issued through a connection pool have no guaranteed order of execution. For a single client, construct a redis.StrictRedis client directly, or pass the param connection_pool=None. This is preferable for a small number of simple commands, as there is less connection management overhead. Alternatively, you can use a pipeline in the context of a connection pool to serialise a batch operation.
https://redis-py.readthedocs.io/en/latest/#redis.ConnectionPool
https://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline
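A minimal sketch of the direct-client suggestion (the host/port/db values are assumptions for a local setup):

import redis

# a single dedicated client instead of the pool built by redis.from_url()
r = redis.StrictRedis(host='localhost', port=6379, db=0)
r.set('test', '{test1: test}')
print(r.get('test'))  # b'{test1: test}'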
I did more experiments on this. It seems the issue is timing-related; the modification below makes it work:
import time

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    time.sleep(5)  # add the delay needed to let the set finish
    print redis.get("test")  # prints "{test1: test}" here
    return "now it works"
I read the Redis documentation, and Redis seems to be single-threaded, so I am not sure why the get call would execute before the set is done. Someone with more experience, please post an explanation.
I'm trying to learn the Flask technology stack, and for my application I'm using Flask-SQLAlchemy. Everything works perfectly, but I'm struggling with writing integration tests. I don't want to use SQLite, since in production I'm using PostgreSQL, and piling on mocks would really test my own implementation rather than the logic itself.
So, after some research I decided to implement tests that write data to the test database and roll back the changes after each test (for performance's sake). Actually, I'm trying to implement something similar to this approach: http://sontek.net/blog/detail/writing-tests-for-pyramid-and-sqlalchemy.
My problem is creating the transaction correctly and being able to roll it back. Here is the code of my base class:
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class MyAppIntegrationTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2:///db_test'
        init_app()
        db.app = app
        db.create_all(app=app)

    @classmethod
    def tearDownClass(cls):
        db.drop_all(app=app)

    def setUp(self):
        db.session.rollback()
        self.trans = db.session.begin(subtransactions=True)

    def tearDown(self):
        self.trans.rollback()
When I try to execute the tests, I get the following error:
Traceback (most recent call last):
  File "myapp/src/core/tests/__init__.py", line 53, in tearDown
    self.trans.rollback()
  File "myapp/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 370, in rollback
    self._assert_active(prepared_ok=True, rollback_ok=True)
  File "myapp/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 203, in _assert_active
    raise sa_exc.ResourceClosedError(closed_msg)
ResourceClosedError: This transaction is closed
I suspect the problem is with scoped_session and that the test run reuses one global session for all tests, but my knowledge of SQLAlchemy is not deep enough yet.
Any help will be highly appreciated!
Thanks!
Your tearDownClass and setUpClass are causing the issues.
setUpClass is called once before all the tests, and tearDownClass once after all the tests in the class.
So if you have 3 tests.
setUpClass is called
setUp is called
tearDown is called (you roll back, but you don't begin a new session, and this throws an error)
setUp is called (another rollback that's going to error)
etc...
Add a db.session.begin() to your tearDown and you'll be fine.
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class MyAppIntegrationTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        app.config['TESTING'] = True
        app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2:///db_test'
        init_app()
        db.app = app
        db.create_all(app=app)

    @classmethod
    def tearDownClass(cls):
        db.drop_all(app=app)

    def setUp(self):
        db.session.rollback()
        self.trans = db.session.begin(subtransactions=True)

    def tearDown(self):
        self.trans.rollback()
        db.session.begin()
I wrote a blog post on how to set this up... in short, you have to create a nested transaction so that any session.commit() calls inside your application don't break your isolation. Then apply a listener to the inner transaction to restart it anytime someone tries to commit it or roll it back. Setup Flask-Sqlalchemy Transaction Test Case
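For reference, that pattern looks roughly like the sketch below (assuming the same db = SQLAlchemy() instance as above; the names are illustrative, not taken from the post verbatim):

import unittest

from sqlalchemy import event

class TransactionalTestCase(unittest.TestCase):

    def setUp(self):
        # run every test inside an outer transaction plus a SAVEPOINT
        self.connection = db.engine.connect()
        self.trans = self.connection.begin()
        db.session = db.create_scoped_session(options={'bind': self.connection, 'binds': {}})
        self.nested = self.connection.begin_nested()

        @event.listens_for(db.session, 'after_transaction_end')
        def restart_savepoint(session, transaction):
            # re-open the SAVEPOINT whenever application code commits or rolls back
            if transaction.nested and not transaction._parent.nested:
                self.nested = self.connection.begin_nested()

    def tearDown(self):
        db.session.remove()
        self.trans.rollback()
        self.connection.close()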
A possible solution to your question:
If your database is not very large and you want to keep the data unchanged, you can make a backup (by writing plain SQL statements) in setUp
"CREATE TABLE {0}_backup SELECT * FROM {0}".format(table_name)
and restore it in tearDown
"DROP TABLE {0}".format(table_name)
"RENAME TABLE {0}_backup TO {0}".format(table_name)