Reuse function as pytest fixture - python

I have a function in my code that is used by FastAPI to provide a db session to the endpoints:
def get_db() -> Generator[Session, None, None]:
    try:
        db = SessionLocal()
        yield db
    finally:
        db.close()
I want to use the same function as a pytest fixture. If I do something like the following, the fixture is not being recognized:
pytest.fixture(get_db, name="db", scope="session")

def test_item_create(db: Session) -> None:
    ...
test_item_create throws an error about db not being a fixture: fixture 'db' not found.
So I can rewrite get_db in my conftest.py and wrap it with pytest.fixture and get things working, but I was wondering if there's a better way of reusing existing functions as fixtures. If I have more helper functions like get_db, it'd be nice not to have to rewrite them for tests.

I think pytest cannot find the fixture as written in your example, because the result of pytest.fixture() is never bound to a name that pytest can collect. Maybe you are trying to get to something like this?
db = pytest.fixture(get_db, name="db", scope="session")

def test_item_create(db: Session) -> None:
    ...
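For reference, a minimal conftest.py sketch along those lines (the import path app.db.session is an assumption, not from the question). What makes pytest collect the fixture is binding the result of pytest.fixture() to a module-level name, and recent pytest versions accept passing the function positionally together with keyword options:

# conftest.py - sketch only; adjust the import path to your project
import pytest

from app.db.session import get_db  # the existing FastAPI dependency (assumed path)

# Wrap the existing generator function; name="db" is what tests request it by.
db = pytest.fixture(get_db, name="db", scope="session")

Tests can then take a db argument without get_db being rewritten anywhere.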

Related

Parallelize integration tests with pytest-xdist

Is it possible to use an in-memory SQLite database to run integration tests in parallel with pytest and pytest-xdist on a FastAPI application?
Update
I have a good number of tests that I would like to run in CI (GitLab CI). However, with a file-based SQLite database the number of IOPS needed per test makes the job time out, so I would like to switch to an in-memory database and parallelize the tests with pytest-xdist.
Every endpoint uses FastAPI's dependency injection for the db context, and what I have tried is to create a fixture for the app like so:
@pytest.fixture(scope="function")
def app():
    """
    Pytest fixture that creates an instance of the FastAPI application.
    """
    app = create_app()
    app.dependency_overrides[get_db] = override_get_db
    return app
def override_get_db():
    SQLALCHEMY_DATABASE_URL = "sqlite:///:memory:"
    engine = create_engine(
        SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
    )
    Base.metadata.drop_all(bind=engine)
    Base.metadata.create_all(bind=engine)
    TestLocalSession = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    init_db(session=TestLocalSession)
    engine.execute("PRAGMA foreign_keys=ON;")
    try:
        db = TestLocalSession()
        yield db
    finally:
        db.close()
Because the endpoints are all async I also need to use httpx instead of the built-in TestClient:
@pytest.fixture(scope='function')
async def client(app):
    """
    Pytest fixture that creates an httpx AsyncClient for the application.
    """
    async with AsyncClient(
        app=app, base_url=f"{settings.BASE_URL}{settings.API_PREFIX}"
    ) as client:
        yield client
The issue I have when I run this test (without pytest-xdist) is that the database is created in a separate thread from the one that is injected into the endpoints, so I always get a SQL error: sqlite3.OperationalError: no such table: certification
Any suggestion on how to solve this? Thanks.
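One common explanation is that every new connection to sqlite:///:memory: gets its own private database, so the tables created during setup are invisible to the connection the endpoints end up using. A workaround often used (an assumption here, not something confirmed in this question) is to pin everything to one shared connection with SQLAlchemy's StaticPool, roughly:

# Sketch: a single shared in-memory SQLite database across sessions/threads.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

from app.db.base import Base  # assumed import path for the app's declarative Base

engine = create_engine(
    "sqlite://",  # in-memory database
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # always reuse the same connection, hence the same database
)
TestLocalSession = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base.metadata.create_all(bind=engine)

Under pytest-xdist each worker is a separate process, so every worker simply gets its own in-memory database.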

How do I patch a class attribute in pytest?

I have a service class that connects to AWS S3. The connection uses boto3 within the __init__() method. I would like to mock this to use a moto S3 instance I've defined in a fixture, but I just can't get the mock to do anything.
Let's say I have a service class that looks like this:
import boto3

class S3Storage:
    def __init__(self):
        self._s3 = boto3.resource('s3')

    def do_download(self):
        self._s3.download_file(
            Key='file.txt',
            Bucket='mybucket',
            Filename='path/to/destination/file.txt',
        )
and then I create a conftest file that has these moto fixtures:
# Fixtures
@pytest.fixture(scope='function')
def mocked_s3r():
    with mock_s3():
        yield boto3.resource('s3')

@pytest.fixture(scope='function')
def mocked_s3client():
    with mock_s3():
        yield boto3.client('s3')

@pytest.fixture(scope='function', autouse=True)
def upload_s3_resources(mocked_s3client, s3files):
    mocked_s3client.create_bucket(Bucket='mybucket')
    mocked_s3client.upload_file(
        Filename='path/to/destination/file.txt',
        Bucket='mybucket',
        Key='file.txt',
    )
The bottom fixture grabs a local file and places it in the moto S3 instance, which can also be accessed through the mocked_s3r resource mock.
My problem is that I cannot make a successful patch for the S3Storage._s3 attribute that holds the boto resource (I know I'm mixing boto clients and resources here, but I don't think that's causing the issue).
So I tried writing some fixtures to patch (using pytest-mock) or monkeypatch the boto resource and/or client.
# This is what I can't make work...
@pytest.fixture(autouse=True)
def mocked_s3(mocked_s3client, mocker):
    mocker.patch('app.utils.s3_storage.boto3.resource', return_value=mocked_s3r)
    return mocked_s3client

# This other approach also doesn't work...
@pytest.fixture(autouse=True)
def mocked_s3(mocked_s3client, mocker):
    mocker_s3storage = mocker.patch('app.utils.s3_storage.boto3.resource')
    mocker_s3storage()._s3 = mocked_s3client
    return mocked_s3client

# Nor this...
@pytest.fixture(autouse=True)
def mocked_s3(mocked_s3client, monkeypatch):
    monkeypatch.setattr('app.utils.s3_storage.S3Storage._s3', mocked_s3client)
    return mocked_s3client
But nothing works. I think I might be fundamentally misunderstanding how to patch an attribute that belongs to an instance of a class.
I'd rather do all this in a fixture, not in each individual test, such that I can write a test like:
def test_download_file(mocked_s3client):
    s3storage = S3Storage()
    s3storage._s3  # This should be a mock object, but it just connects to the real AWS
    s3storage.do_download()
and I don't have to specify the mock each time.
It is not necessary to patch the client in order for Moto to work. As long as clients/resources are created while the mock is active, they are automatically patched.
Using your example fixtures, the following test works:
def test_file_exists(upload_s3_resources):
    S3Storage().do_download()
    # todo: actually verify something happened
Note that the download_file call in your logic should be slightly modified, but I'm assuming that was just an example to keep things simple. I had to change it to the following to get the test to succeed:
self._s3.Bucket('mybucket').download_file('file.txt', 'test.txt')
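To make that concrete, here is a small self-contained sketch (not from the original answer; the module path app.utils.s3_storage is taken from the question's patch targets) showing that objects created while the moto mock is active are patched automatically, with an actual assertion on the downloaded file:

import boto3
from moto import mock_s3

from app.utils.s3_storage import S3Storage  # module path as used in the question

def test_download_without_patching(tmp_path):
    with mock_s3():
        # Clients/resources created inside this block talk to moto, not real AWS.
        client = boto3.client('s3', region_name='us-east-1')
        client.create_bucket(Bucket='mybucket')
        client.put_object(Bucket='mybucket', Key='file.txt', Body=b'hello')

        storage = S3Storage()  # its boto3.resource('s3') call is mocked as well
        destination = tmp_path / 'file.txt'
        storage._s3.Bucket('mybucket').download_file('file.txt', str(destination))

        assert destination.read_bytes() == b'hello'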

FastAPI dependencies (yield): how to call them manually?

FastAPI uses Depends() to inject variables that are either returned or yielded. E.g., FastAPI with SQLAlchemy:
# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

...

def create_user(db: Session = Depends(get_db)):
    ...
If I wanted to use that get_db() somewhere else (outside a FastAPI route), how would I do that? I know it's Python core knowledge, but I can't seem to figure it out. My initial thought was db = yield from get_db(), but I can't call yield from in async functions (and don't know if it would work besides). Then I tried:
with get_db() as db:
    pass
Which fails as the original get_db() isn't wrapped with @contextmanager. (Note, I don't want to decorate this - I'm using get_db as an example; I need to work with more complicated dependencies.) Finally, I tried db = next(get_db()) - which works, but I don't think that's the correct solution. When/how will finally be invoked - when my method returns? And in some other dependencies, there's post-yield code that needs to execute; would I need to call next() again to ensure that code executes? Seems like next() isn't the right way. Any ideas?
You can use contextmanager not as a decorator but as a function that returns a context manager:
from contextlib import contextmanager

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# synchronously
with contextmanager(get_db)() as session:  # executes until the yield; session is the yielded value
    pass
# the finally block executes on exit from the with block
But keep in mind that the code will execute synchronously. If you want to execute it in a thread, then you can use the FastAPI tools:
import asyncio
from contextlib import contextmanager

from fastapi.concurrency import contextmanager_in_threadpool

async def some_coro():
    async with contextmanager_in_threadpool(contextmanager(get_db)()) as session:
        pass
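As a side note (not part of the original answer): if the dependency is itself an async generator, contextlib.asynccontextmanager gives you the same pattern without a threadpool. AsyncSessionLocal below is a hypothetical async session factory:

from contextlib import asynccontextmanager

async def get_async_db():  # hypothetical async variant of the dependency
    db = AsyncSessionLocal()  # hypothetical async session factory
    try:
        yield db
    finally:
        await db.close()

async def use_it_manually():
    # Wrap the async generator function, then enter it with async with;
    # the dependency's finally block runs when the block exits.
    async with asynccontextmanager(get_async_db)() as session:
        ...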

Running a single test works, but running multiple tests fails - Flask and Pytest

This is really strange. I have the following simple Flask application:
- root
  - myapp
    - a route with /subscription_endpoint
  - tests
    - test_az.py
    - test_bz.py
test_az.py and test_bz.py both look the same. There is a setup (taken from https://diegoquintanav.github.io/flask-contexts.html) and then one simple test:
import pytest
from myapp import create_app
import json

@pytest.fixture(scope='module')
def app(request):
    from myapp import create_app
    return create_app('testing')

@pytest.fixture(autouse=True)
def app_context(app):
    """Creates a flask app context"""
    with app.app_context():
        yield app

@pytest.fixture
def client(app_context):
    return app_context.test_client(use_cookies=True)

def test_it(client):
    sample_payload = {"test": "test"}
    response = client.post("/subscription_endpoint", json=sample_payload)
    assert response.status_code == 500
Running pytest will run both files, but test_az.py succeeds while test_bz.py fails. The HTTP request returns a 404 error, meaning test_bz cannot find the route in the app.
If I run them individually, they both succeed. This is very strange! It seems like the first test is somehow influencing the second test.
I have actually added a third test, test_cz.py, which fails as well. So only the first one ever passes. I feel like this has something to do with those fixtures, but I have no idea where to look.
Create a conftest.py for the fixtures (e.g. the client fixture) and use the same fixture in both tests.
If the provided code is duplicated in the other file, then you are creating two client fixtures. I would first clean this up: create a single conftest.py that contains all the fixtures and use them in your tests; that might already help.
Also check out how to test with pytest as described in the Flask documentation.
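A sketch of what that shared conftest.py could look like, reusing the fixtures from the question but defining them only once:

# conftest.py - the fixtures from the question, moved to one shared place
import pytest

from myapp import create_app

@pytest.fixture(scope='module')
def app():
    return create_app('testing')

@pytest.fixture(autouse=True)
def app_context(app):
    """Creates a Flask app context for each test."""
    with app.app_context():
        yield app

@pytest.fixture
def client(app_context):
    return app_context.test_client(use_cookies=True)

The test modules then contain only the test functions and request client as usual.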

Pytest and database cleanup before running tests

I am using Flask to build a web service and pytest for testing.
I am using pytest fixtures to set up and tear down the test resources, but I need to test a POST endpoint that creates some records in the database.
How do we clean up these records?
You can use a fixture to do that cleanup.
@pytest.fixture
def cleanup():
    yield
    # This is executed when the test using the fixture is done
    db_cleanup()

def test_records_created(cleanup):  # pylint: disable=redefined-outer-name,unused-argument
    response = app.test_client().post('/path', json=payload)
    assert response.status_code == 200
    assert ...
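db_cleanup() is not defined in the answer; a hypothetical implementation for a Flask-SQLAlchemy setup (the names db and Record are assumptions) could look like this:

# Hypothetical helper - adjust to your own models and session handling.
from myapp import db             # assumed Flask-SQLAlchemy instance
from myapp.models import Record  # assumed model created by the POST endpoint

def db_cleanup():
    db.session.query(Record).delete()  # remove the rows the test created
    db.session.commit()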
