I have two databases: master, and a second database, master_test, for testing.
To retrieve a total price from a table I have created a @classmethod in a model. This method returns the sum of the price column, filtered by month and year. Here is the class method:
@classmethod
def get_total_book_price(cls, id):
    query = Book.query.with_entities(
        func.sum(Book.price).label("price")
    ).filter(
        extract('year', Book.created_at) >= datetime.date.today().year,
        extract('month', Book.created_at) >= datetime.date.today().month
    ).filter(
        Book.id == id
    ).all()
    return query[0].price
This query works nicely. But when I run it in a test case, it says the master database does not exist. It should use the master_test database instead of the master database.
Here is the test code:
def test_get_total_book_price(self):
    id = 1
    response = Book.get_total_book_price(id)
    if not response:
        self.assertEqual(response, False)
    self.assertEqual(response, True)
It's showing this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: database "master"
does not exist
(Background on this error at: http://sqlalche.me/e/e3q8)
----------------------------------------------------------------------
Ran 34 tests in 2.011s
FAILED (errors=1)
ERROR: Job failed: exit code 1
Some other test cases work fine with master_test. Why does this test look for the master database?
You have to provide an application context for your test function. The best way to do this is with factories. An excellent description of how to do this is http://alanpryorjr.com/2019-05-20-flask-api-example/. If you have an app and a db fixture, you can just use them in your test function:
from test.fixtures import app, db

def test_get_total_book_price(self, db):
    id = 1
    response = Book.get_total_book_price(id)
    if not response:
        self.assertEqual(response, False)
    self.assertEqual(response, True)
Yes, the only difference is the db argument in the function signature. I cannot say why your other tests are working; my best guess is that your failing test is executed after the test context is destroyed. Better to be explicit about your app context every single time.
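For reference, here is a minimal sketch of what such a test/fixtures module could look like. The myapp module, the create_app factory, and the master_test URI are assumptions for illustration, not part of the original code:

# test/fixtures.py -- hypothetical layout
import pytest
from myapp import create_app, db as _db  # assumed app factory and SQLAlchemy instance

@pytest.fixture
def app():
    # Point the app at the test database, not the default master database.
    test_config = {"TESTING": True,
                   "SQLALCHEMY_DATABASE_URI": "postgresql://localhost/master_test"}
    app = create_app(test_config)
    with app.app_context():
        yield app

@pytest.fixture
def db(app):
    # Fresh schema per test; every query now runs against master_test.
    _db.create_all()
    yield _db
    _db.session.remove()
    _db.drop_all()

With these in place, requesting the db fixture is what guarantees the query in get_total_book_price hits master_test.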
If everything else fails (and you have access to your app with the right database connection) you could push the context manually with:
from foo import app

def test_get_total_book_price(self, app):
    app.app_context().push()
    id = 1
    response = Book.get_total_book_price(id)
    if not response:
        self.assertEqual(response, False)
    self.assertEqual(response, True)
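Equivalently, if you prefer not to leave the pushed context dangling, you can scope it with a with block so it is popped automatically when the test ends (same assumed foo import as above):

from foo import app

def test_get_total_book_price(self):
    with app.app_context():
        response = Book.get_total_book_price(1)
        if not response:
            self.assertEqual(response, False)
        self.assertEqual(response, True)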
I want to stress that you should use factories for your testing.
Reference: https://flask.palletsprojects.com/en/1.0.x/appcontext/
Related
This is my Flask unit test setup: I launch one app_instance for the whole test session and roll back after each test function to make sure the test DB stays fresh and clean.
@fixture(scope="session", autouse=True)
def app_instance():
    app = setup_test_app()
    create_test_user_records()
    return app

@commit_test_data
def create_test_user_records():
    db.session.add_all([Test_1, Test_2, Test_3])

@fixture(scope="function", autouse=True)
def enforce_db_rollback_for_all_tests():
    yield
    db.session.rollback()

def commit_test_data(db_fn):
    @functools.wraps(db_fn)
    def wrapper():
        db_fn()
        db.session.commit()
    return wrapper
These worked quite well until one day I wanted to add an API test.
def test_admin(app_instance):
    test_client = app_instance.test_client()
    res = test_client.get("/admin")
    # Assert
    assert res.status_code == 200
The unit test itself worked fine and passed; however, it broke other unit tests, which threw errors like:
sqlalchemy.orm.exc.DetachedInstanceError: Instance <User at 0x115a78b90> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: https://sqlalche.me/e/14/bhk3)
I have a test client which is implemented as a @pytest.fixture named client. In the test client I have all my database tables/models. Inside the test script, I want to be able to write log records to the real database tables, not to the test client tables. The test script runs fine; however, nothing changes in the real database tables.
Here is the simplified example:
@pytest.fixture
def client():
    tables = [
        User,
        Profile
    ]
    with db.atomic():
        db.drop_tables(tables)
        db.create_tables(tables)
    with app.test_client() as client:
        yield client
    with db.atomic():
        db.drop_tables(tables)

def test_script(client):
    with db.atomic():
        User.create(
            name="example",
            surname="example"
        )
    assert True == True
Additionally, I don't know if it matters, but I am using Peewee as the ORM and SQLite for storage.
I think I am misunderstanding how dependency injection is used in FastAPI, specifically in the context of DB sessions.
My current setup is FastAPI, SQLAlchemy & Alembic (although I am writing the raw SQL myself), Pydantic, etc. Pretty straightforward.
I have basic CRUD routes which communicate directly with my repository layer, and all is working. In these methods I am able to successfully use the DB dependency injection. See the example code below:
Dependencies
def get_database(request: Request) -> Database:
    return request.app.state._db

def get_repository(Repo_type: Type[BaseRepository]) -> Callable:
    def get_repo(db: Database = Depends(get_database)) -> Type[BaseRepository]:
        return Repo_type(db)
    return get_repo
Example GET by ID Route
#router.get("/{id}/", response_model=TablePub, name="Get Table by id")
async def get_table_by_id(
id: UUID, table_repo: TableRepository = Depends(get_repository(TableRepository))
) -> TableInDB:
table = await table_repo.get_table_by_id(id=id)
if not table:
raise HTTPException(status_code=HTTP_404_NOT_FOUND, detail="No Table found with that id.")
return table
Corresponding Repository
from databases import Database

class BaseRepository:
    def __init__(self, db: Database) -> None:
        self.db = db

class TableRepository(BaseRepository):
    async def get_table_by_id(self, *, id: UUID) -> TableInDB:
        table = await self.db.fetch_one(
            query=GET_TABLE_BY_ID_QUERY,
            values={"id": id},
        )
        if not table:
            return None
        return TableInDB(**table)
Now I want to start doing some more complex operations and add a service layer to house all of the business logic.
What is the correct way to structure this so that I can reuse the repositories I have already written? For example, I want to return all Sales for a Table, but I need to get the table number from the DB before I can query the Sales table. The route takes table_id as a param -> service layer, where I fetch the table by ID (using the existing repo) -> from that object, get the table number, then make a request to an external API that requires the table number as a param.
What I have so far:
Route
#router.get("/{table_id}", response_model=SalesPub, name="Get Sale Entries by table id")
async def get_sales_by_table_id(
table_id: UUID = Path(..., title="ID of the Table to get Sales Entries for")):
response = await SalesService.get_sales_from_external_API(table_id=table_id)
return response
Service Layer 'SalesService'
async def get_sales_from_external_API(
    table_id: UUID,
    table_repo: TableRepository = Depends(get_repository(TableRepository))
) -> TableInDB:
    table_data = await table_repo.get_table_by_id(id=table_id)
    if table_data is None:
        logger.info(f"No table with id:{table_id} could be found")
    table_number = table_data.number
    client_id = table_data.client_id
    sales = await salesGateway.call_external_API(table_number, client_id)
    return sales
The code breaks here: table_data = await table_repo.get_table_by_id(id=table_id)
with the error: AttributeError: 'Depends' object has no attribute 'get_table_by_id'
What I don't understand is that the code is almost identical to the route method that gets the table by ID. The TableRepository does have a get_table_by_id method. What am I doing incorrectly, and is this the best way to split business logic from database actions?
Thanks in advance
I seem to have found a solution to this, although I'm not sure if it is the best way.
Depends only works in FastAPI routes and dependencies; I was trying to use it in a regular function.
I needed to resolve table_repo with Depends in the route and pass the repository in as a parameter to the external API call function.
#router.get("/table/{table_id}/", response_model=SalePub, name="Get Sales by table id")
async def get_sales_by_table_id(
table_id: UUID = Path(..., title="ID of the Table to get Sales Entries for"),
table_repo: TableRepository = Depends(get_repository(TableRepository))):
response = await get_sales_entries_from_pos(table_id=table_id, table_repo=table_repo)
return response
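For completeness, the service function then accepts the already-resolved repository as a plain argument instead of declaring Depends itself. Here is a minimal sketch under that assumption (the name get_sales_entries_from_pos follows the route above; salesGateway and the table fields are carried over from the question, and a guard is added so a missing table fails loudly instead of raising an AttributeError):

async def get_sales_entries_from_pos(
    table_id: UUID,
    table_repo: TableRepository,  # resolved by the route via Depends, passed in plainly
):
    table_data = await table_repo.get_table_by_id(id=table_id)
    if table_data is None:
        raise HTTPException(status_code=404, detail="No table found with that id.")
    # The external API needs the table number and client id, not the UUID.
    return await salesGateway.call_external_API(table_data.number, table_data.client_id)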
The issue I am foreseeing is that if I have a large service that needs access to many repos, I have to wire up that access in the router through Depends, which just seems a bit strange to me.
I am writing some tests with pytest; I want to test creating a user and sending the email via the POST method.
With some debugging, I found the issue: I open two in-memory database sessions, even though both come from the same SessionLocal().
How can I fix this? I tried db.flush(), but it doesn't work.
This is the POST method code:
#router.post("/", response_model=schemas.User)
def create_user(
*,
db: Session = Depends(deps.get_db), #the get_db is SessionLocal()
user_in: schemas.UserCreate,
current_user: models.User = Depends(deps.get_current_active_superuser),
) -> Any:
"""
Create new user.
"""
user = crud.user.get_by_email(db, email=user_in.email)
if user:
raise HTTPException(
status_code=400,
detail="The user with this username already exists in the system.",
)
user = crud.user.create(db, obj_in=user_in)
print("====post====")
print(db.query(models.User).count())
print(db)
if settings.EMAILS_ENABLED and user_in.email:
send_new_account_email(
email_to=user_in.email, username=user_in.email, password=user_in.password
)
return user
And the test code is:
def test_create_user_new_email(
    client: TestClient, superuser_token_headers: dict, db: Session  # db is SessionLocal()
) -> None:
    username = random_email()
    password = random_lower_string()
    data = {"email": username, "password": password}
    r = client.post(
        f"{settings.API_V1_STR}/users/", headers=superuser_token_headers, json=data,
    )
    assert 200 <= r.status_code < 300
    created_user = r.json()
    print("====test====")
    print(db.query(User).count())
    print(db)
    user = crud.user.get_by_email(db, email=username)
    assert user
    assert user.email == created_user["email"]
And the test result is:
> assert user
E assert None
====post====
320
<sqlalchemy.orm.session.Session object at 0x7f0a9f660910>
====test====
319
<sqlalchemy.orm.session.Session object at 0x7f0aa09c4d60>
Your code does not provide enough information to help you; the key issues are probably in what is hidden behind your comments.
It also seems like you are confusing SQLAlchemy sessions and databases. If you are not familiar with these concepts, I highly recommend having a look at the SQLAlchemy documentation.
But, looking at your code structure, it seems like you are using FastAPI.
Then, if you want to test SQLAlchemy with pytest, I recommend using pytest fixtures with SQL transactions.
Here is my suggestion on how to implement such a test. I'll suppose that you want to run the tests on your actual database and not create a new database specifically for the tests. This implementation is heavily based on this GitHub gist (the author made a "feel free to use" statement, so I suppose he is ok with me copying his code here):
# test.py
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
from fastapi.testclient import TestClient

from myapp.models import BaseModel
from myapp.main import app  # import your fastapi app
from myapp.database import get_db  # import the dependency

client = TestClient(app)

# scope="session" means the engine will last for the whole test session
@pytest.fixture(scope="session")
def engine():
    return create_engine("postgresql://localhost/test_database")

# at the end of the test session, drop the created metadata using a fixture with yield
@pytest.fixture(scope="session")
def tables(engine):
    BaseModel.metadata.create_all(engine)
    yield
    BaseModel.metadata.drop_all(engine)

# here scope="function" (the default), so each time a test finishes the database is cleaned
@pytest.fixture
def dbsession(engine, tables):
    """Returns an sqlalchemy session, and after the test tears down everything properly."""
    connection = engine.connect()
    # begin the nested transaction
    transaction = connection.begin()
    # use the connection with the already started transaction
    session = Session(bind=connection)
    yield session
    session.close()
    # roll back the broader transaction
    transaction.rollback()
    # put back the connection to the connection pool
    connection.close()

## end of the gist.github code

@pytest.fixture
def db_fastapi(dbsession):
    def override_get_db():
        db = dbsession
        try:
            yield db
        finally:
            db.close()
    client.app.dependency_overrides[get_db] = override_get_db
    yield dbsession

# Now you can run your test
def test_create_user_new_email(db_fastapi):
    username = random_email()
    # ...
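One hedged addition to the gist code: app.dependency_overrides is a plain dict that outlives each test, so it is worth removing the override during teardown to keep tests isolated. A small variant of the db_fastapi fixture above:

@pytest.fixture
def db_fastapi(dbsession):
    def override_get_db():
        yield dbsession

    client.app.dependency_overrides[get_db] = override_get_db
    yield dbsession
    # Remove the override so later tests fall back to the real get_db.
    client.app.dependency_overrides.pop(get_db, None)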
I am using moto to test AWS functionality in my codebase. One of the issues I have run into is that when testing Athena, the query status stays "QUEUED" indefinitely, causing the test to fail or time out.
Here is the method to be tested:
import time

import boto3

class Athena:
    CLIENT = boto3.client("athena")

    class QueryError(Exception):
        """A class for exceptions related to queries."""

    @classmethod
    def execute_query(cls, query, result_location, check_status=True,
                      time_limit=10):
        """
        Execute a query in Athena.
        """
        _result_configuration = {"OutputLocation": result_location}
        _kwargs = {"QueryString": query,
                   "ResultConfiguration": _result_configuration}
        response = cls.CLIENT.start_query_execution(**_kwargs)
        query_id = response["QueryExecutionId"]
        if check_status:
            old_time = time.time()
            while True:
                status = cls.CLIENT.get_query_execution(
                    QueryExecutionId=query_id)
                status = status["QueryExecution"]["Status"]["State"]
                if status in ["SUCCEEDED", "FAILED", "CANCELLED"]:
                    if status == "FAILED":
                        raise cls.QueryError("error")
                    break
                time.sleep(0.2)  # 200ms
                if time.time() - old_time > time_limit and status == "QUEUED":
                    raise cls.QueryError("time limit reached")
        return query_id
Here is the fixture passed into the test:
import boto3
import pytest
from moto.s3 import mock_s3

@pytest.fixture
def s3():
    with mock_s3():
        s3 = boto3.client("s3")
        yield s3
Here is the test (keep in mind you need to replace x in from x import Athena with the module that contains the method above):
import uuid

import boto3
import pytest
from moto.athena import mock_athena
from moto.s3 import mock_s3

@mock_s3
@mock_athena
def test_execute_query_check(s3):
    """
    Test for 'execute_query' (with status check)
    """
    # Import inside the test so the class-level boto3 client is created
    # while the moto mocks are active.
    from x import Athena
    CLIENT = s3
    bucket_name = "pytest." + str(uuid.uuid4())
    # Bucket creation
    bucket_config = {"LocationConstraint": "us-east-2"}
    CLIENT.create_bucket(Bucket=bucket_name,
                         CreateBucketConfiguration=bucket_config)
    waiter = CLIENT.get_waiter("bucket_exists")
    waiter.wait(Bucket=bucket_name)
    s3_location = f"s3://{bucket_name}/"
    query = "SELECT current_date, current_time;"
    query_id = Athena.execute_query(query, s3_location,
                                    check_status=True)
    assert query_id
This test fails because moto never moves the status of the query past "QUEUED", and the test expects a state change; otherwise it raises an exception.
I would like to be able to do something like:
from moto.athena import athena_backends
athena_backends['us-east-2'].job_flows[query_id].state = "SUCCEEDED"
as was suggested in this issue: https://github.com/spulec/moto/issues/380
However the "job flows" attribute does not seem to exist anymore on the boto3 mapreduce backend, and I cant find a method to explicitly change it.
Ideally this would be able to happen somewhere in the test to manually change the state of the query to simulate how it would be with actual resources.
State can be accessed and changed as follows:
athena_backends['us-east-2'].executions.get(query_id).status
Sample code snippet:
from moto.athena import athena_backends

query = "SELECT stuff"
location = "s3://bucket-name/prefix/"
database = "database"

# Start query
exec_id = self.client.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": database},
    ResultConfiguration={"OutputLocation": location},
)["QueryExecutionId"]

athena_backends['us-east-2'].executions.get(exec_id).status = "CANCELLED"
It seems to me that moto only returns QUEUED for start_query_execution; you can take a look at the source code here.
Another approach is using from unittest import mock; then you can do something like:
cls.CLIENT = mock.Mock()
cls.CLIENT.start_query_execution.return_value = {"QueryExecutionId": "1234"}
cls.CLIENT.get_query_execution.side_effect = [
    {"QueryExecution": {"Status": {"State": "QUEUED"}}},
    {"QueryExecution": {"Status": {"State": "SUCCEEDED"}}},
]
So the first time cls.CLIENT.get_query_execution(..) is called it will report that the query is queued, but the second time it will report that it succeeded, and then you will be able to test both execution paths.
Also, with moto you won't be able to test all the cases, because apart from the QUEUED status you can only set the query status to CANCELLED, as you can see here.
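Putting the mock approach together, here is a minimal sketch of a self-contained test under the assumptions above (x is the question's placeholder module for the Athena class, the response shapes mirror what execute_query reads, and moto is not needed at all here):

from unittest import mock

def test_execute_query_succeeds_after_queued():
    from x import Athena  # question's placeholder module
    with mock.patch.object(Athena, "CLIENT") as client:
        client.start_query_execution.return_value = {"QueryExecutionId": "1234"}
        # First poll sees QUEUED, second poll sees SUCCEEDED, ending the loop.
        client.get_query_execution.side_effect = [
            {"QueryExecution": {"Status": {"State": "QUEUED"}}},
            {"QueryExecution": {"Status": {"State": "SUCCEEDED"}}},
        ]
        query_id = Athena.execute_query("SELECT 1;", "s3://bucket/prefix/")
        assert query_id == "1234"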