FastAPI dependencies (yield): how to call them manually? - python

FastAPI uses Depends() to inject variables that are either returned or yielded. E.g., FastAPI with SQL:
# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

...

def create_user(db: Session = Depends(get_db)):
    ...
If I wanted to use that get_db() somewhere else (outside a FastAPI route), how would I do that? I know it's Python core knowledge, but I can't seem to figure it out. My initial thought was db = yield from get_db(), but I can't call yield from in async functions (and don't know if it would work besides). Then I tried:
with get_db() as db:
    pass
This fails because the original get_db() isn't wrapped with @contextmanager. (Note, I don't want to decorate it - get_db is just an example; I need to work with more complicated dependencies.) Finally, I tried db = next(get_db()), which works, but I don't think that's the correct solution. When/how will finally be invoked - when my method returns? And in some other dependencies, there's post-yield code that needs to execute; would I need to call next() again to ensure that code executes? It seems like next() isn't the right way. Any ideas?

You can use contextmanager not as a decorator but as a function that returns a context manager:
from contextlib import contextmanager

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# synchronously
with contextmanager(get_db)() as session:  # executes until yield; session is the yielded value
    pass
# the finally block executes on exit from the with-block
But keep in mind that the code will execute synchronously. If you want to execute it in a thread, then you can use the FastAPI tools:
import asyncio
from contextlib import contextmanager

from fastapi.concurrency import contextmanager_in_threadpool

async def some_coro():
    async with contextmanager_in_threadpool(contextmanager(get_db)()) as session:
        pass
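
The same trick works for async dependencies that use yield: wrap them with contextlib.asynccontextmanager. A minimal sketch, assuming a hypothetical async dependency built on an assumed AsyncSessionLocal factory:

from contextlib import asynccontextmanager

# Hypothetical async dependency with the same try/yield/finally shape
async def get_async_db():
    db = AsyncSessionLocal()  # assumed async session factory
    try:
        yield db
    finally:
        await db.close()

async def use_db_manually():
    # Wrap the generator function, then call it to get an async context manager
    async with asynccontextmanager(get_async_db)() as session:
        ...  # the cleanup in finally runs when the with-block exits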

Related

Reuse function as pytest fixture

I have a function in my code that is being used by FastAPI to provide a db session to the endpoints:
def get_db() -> Generator[Session, None, None]:
    try:
        db = SessionLocal()
        yield db
    finally:
        db.close()
I want to use the same function as a pytest fixture. If I do something like the following, the fixture is not being recognized:
pytest.fixture(get_db, name="db", scope="session")

def test_item_create(db: Session) -> None:
    ...
test_item_create throws an error about db not being a fixture: fixture 'db' not found.
So I can rewrite get_db in my conftest.py and wrap it with pytest.fixture to get things working, but I was wondering if there's a better way of reusing existing functions as fixtures. If I have more helper functions like get_db, it'd be nice not to have to rewrite them for tests.
I think pytest cannot find the fixture as things are written in your example. Maybe you are trying to get to something like this?
db = pytest.fixture(get_db, name="db", scope="session")

def test_item_create(db: Session) -> None:
    ...
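
For completeness, the same call can live in conftest.py so the app's dependency is reused without rewriting it - a minimal sketch, with an assumed import path:

# conftest.py
import pytest

from app.deps import get_db  # assumed import path

# pytest discovers fixtures by scanning module-level names, so the
# wrapped function must be bound to a variable; name="db" is the name
# tests use to request it.
db = pytest.fixture(get_db, name="db", scope="session")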

How to close aiohttp.ClientSession automatically before program ends from a library POV?

I'm building an async library with aiohttp. The library has a single client that, on instantiation, creates a ClientSession and uses it to make requests to an API (it's a REST API wrapper).
The problem I'm facing is how to cleanly close the client session on exit.
If the session is not explicitly closed, a whole lot of errors come out, but I can't simply use context managers to close the session since I don't know when the program will end.
A typical use would be this:
import asyncio

from mylibrary import Client

client = Client()

async def main():
    await client.get_foo(...)
    await client.patch_bar(...)

asyncio.run(main())
I could add await client.close_session() in main, but I want to remove this responsibility from the end user, so ideally the client would automatically close the ClientSession when the program ends.
How can I do this?
I have tried using __del__ on the client to get the loop and close the session, without success, as well as using the atexit library, but it seems that by the time these run the asyncio loop has already been destroyed and I still get the warnings.
The specific error is:
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x0000013ACFD54AF0>
transport: <_ProactorSocketTransport fd=1052 read=<_OverlappedFuture cancelled>>
I did some research on this error, and Google seems to think it's because I need to implement flow control; I have, however, and this error only occurs if I don't explicitly close the session.
Unfortunately, it seems like the only clean pattern that applies here is to make your client itself an (async) context manager and require that your users use it in a with block.
The __del__ method could work in some cases - but it would require that your users' code not "leak" the Client instance itself.
So, while the code is trivial, the burden on your users is not zero:
class Client:
    ...
    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_value, tb):
        await self.close_session()
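
A usage sketch from the end user's side, with the hypothetical method names from the question:

import asyncio

from mylibrary import Client

async def main():
    # __aenter__ returns the client; __aexit__ awaits close_session()
    async with Client() as client:
        await client.get_foo(...)
        await client.patch_bar(...)

asyncio.run(main())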
Creating a pseudo-hook on loop.stop:
Another way, though not "clean" and not guaranteed to work, could be to decorate the running loop's stop function to add a call to close_session.
If the user code just "halts" and does not tear down the loop properly, this can't help anyway - but I guess it might be an option for "well behaved" users.
The big problem here is that this is not documented - but taking a peek at asyncio internals, it looks like it will always go through self.stop().
import asyncio

class ShutDownCb:
    def __init__(self, cb):
        self.cb = cb
        self.stopping = False
        loop = self.loop = asyncio.get_running_loop()
        self.original_stop = loop.stop
        loop.stop = self.new_stop

    async def _stop(self):
        self.task.result()
        return self.original_stop()

    def new_stop(self):
        if not self.stopping:
            self.stopping = True
            self.task = asyncio.create_task(self.cb())
            asyncio.create_task(self._stop())
            return
        return self.original_stop()

class Client:
    def __init__(self, ...):
        ...
        ShutDownCb(self.close_session)
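
Hedged, since this relies on undocumented internals: if it works as intended, asyncio.run() stops the loop through loop.stop() when main() finishes, and the wrapped stop schedules close_session() first. Note that Client() must now be instantiated while the loop is running, because ShutDownCb calls asyncio.get_running_loop():

async def main():
    client = Client()  # must be created inside the running loop
    await client.get_foo(...)

asyncio.run(main())  # close_session() is scheduled before the loop stops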

'async_generator is not a callable object' FastAPI dependency issue

I am trying to create a FastAPI app with async SQLAlchemy.
The get_db dependency causes a weird TypeError: <async_generator object get_db at 0x7ff6d9d9aa60> is not a callable object issue.
Here's my code:
db.py

from typing import Generator

from .db.session import SessionLocal

async def get_db() -> Generator:
    try:
        db = SessionLocal()
        yield db
    finally:
        await db.close()
session.py

from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession

from .core.config import settings

engine = create_async_engine(
    settings.SQLALCHEMY_DATABASE_URI,
    pool_pre_ping=True
)

SessionLocal = AsyncSession(
    autocommit=False,
    autoflush=False,
    bind=engine
)
I followed almost all of the instructions posted here: https://6060ff4ffd0e7c1b62baa6c7--fastapi.netlify.app/advanced/sql-databases-sqlalchemy/#more-info
I have figured this out: when you pass the generator get_db as a dependency to a FastAPI endpoint, you reference it as get_db, without the parentheses.
For example:
from typing import List, Any

from fastapi import APIRouter, HTTPException, Depends, status
from sqlalchemy.ext.asyncio import AsyncSession

from . import models, crud, schemas
from .deps.db import get_db

router = APIRouter()

@router.post('/',
             response_model=schemas.StaffAccount,
             status_code=status.HTTP_201_CREATED)
async def create_staff_account(
    db: AsyncSession = Depends(get_db),
    staff_acct: schemas.StaffAccountCreate = Depends(schemas.StaffAccountCreate)
) -> Any:
    q = await crud.staff.create(db=db, obj_in=staff_acct)
    if not q:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='An error occurred while processing your request')
    return q
This is such a minor detail that it can trip up beginners (like me), so look closely at your code.
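
To make the distinction concrete, a minimal side-by-side sketch (hypothetical handler names):

# Wrong: get_db() creates an async generator object, which FastAPI then
# tries to call per request -> "<async_generator ...> is not a callable object"
async def handler_bad(db: AsyncSession = Depends(get_db())): ...

# Right: pass the function itself; FastAPI calls it once per request
async def handler_good(db: AsyncSession = Depends(get_db)): ...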
The problem is
engine = create_async_engine(
    settings.SQLALCHEMY_DATABASE_URI,
    pool_pre_ping=True
)
You are filling engine with a promise that has yet to be fulfilled. Basically, the async functionality allows you to go on with the code while some I/O or networking stuff is still pending.
So you are passing the engine as a parameter, although the connection may not have been established yet.
You should await the return of the engine before using it as a parameter for other functions.
Here's some more information about the async functionality of Python:
https://www.educba.com/python-async/

Twisted and nested Deferred with inline callbacks in crossbar.io

I'm relatively new to Twisted and crossbar.io, and I'm currently working on some database abstractions using sqlalchemy and alchimia (a layer to use sqlalchemy with Twisted). The db abstractions I've built so far are working as expected, but I get problems when making asynchronous db calls inside my crossbar procedures. I guess that is because I have nested asynchronous calls, with the outer call being some crossbar remote procedure and the inner being the database access.
What I want to do is the following:
@inlineCallbacks
def onJoin(self, details):
    ...
    def login(user, password):  # <-- outer call
        ...
        db_result = yield db.some_query(user, password)  # <-- inner call
        for row in db_result:  # <-- need to wait for db_result
            return 'authentication_ticket'
        return 'error'
To be able to authenticate the user, I have to wait for the data from the db and then either issue a valid ticket or return an error. Currently I get an error that I cannot iterate over a Deferred, because I don't wait for the db query to finish.
Now, how do I wait inside my login RPC for the inner db call, then authenticate the user and then return? I know it is possible to chain asynchronous calls in Twisted, but I don't know how to do this in this special case with crossbar, and when my outer function uses the @inlineCallbacks shortcut.
Update:
Tinkering around, I can now add a callback to the db query. Unfortunately, I don't know how to pass a return value from the inner function to the outer function (the outer function needs to return the ticket the user gets authenticated with):
@inlineCallbacks
def onJoin(self, details):
    ...
    def login(user, password):  # <-- outer call
        db_result = db.some_query(user, password)  # <-- inner call
        def my_callback(query_result):
            if not query_result.is_empty():
                return 'user_ticket'  # <-- needs to be returned by outer method
        db_result.addCallback(my_callback)
        return my_callback_result  # <-- need something like that
I tried defer.returnValue('user_ticket') inside my callback function but this gives me an error.
Your problem appears to be that while login is expecting to yield Deferreds, it isn't decorated with @inlineCallbacks.
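
In other words, the inner login function needs its own decorator so its yield actually waits on the Deferred. A minimal sketch, assuming db.some_query returns a Deferred and the result object has the is_empty() method from the question:

from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def login(user, password):
    db_result = yield db.some_query(user, password)  # resumes once the Deferred fires
    if not db_result.is_empty():
        returnValue('user_ticket')
    returnValue('error')

On Python 3, a plain return 'user_ticket' inside the decorated generator works as well.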

SQLAlchemy+Tornado: How to create a scopefunc for SQLAlchemy's ScopedSession?

Using tornado, I want to create a bit of middleware magic that ensures that my SQLAlchemy sessions get properly closed/cleaned up so that objects aren't shared from one request to the next. The trick is that, since some of my tornado handlers are asynchronous, I can't just share one session for each request.
So I am left trying to create a ScopedSession that knows how to create a new session for each request. All I need to do is define a scopefunc for my code that can turn the currently executing request into a unique key of some sort; however, I can't seem to figure out how to get the current request at any one point in time (outside of the scope of the current RequestHandler, which my function doesn't have access to either).
Is there something I can do to make this work?
You might want to associate the Session with the request itself (i.e. don't use scoped_session if it's not convenient). Then you can just say request.session. It still needs hooks at the start/end of the request for setup/teardown.
edit: custom scoping function
def get_current_tornado_request():
    # TODO: ask on the Tornado mailing list how
    # to acquire the request currently being invoked
    ...

Session = scoped_session(sessionmaker(), scopefunc=get_current_tornado_request)
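
One possible way to fill in that TODO (not part of the original answer) is to key the scope on a contextvars.ContextVar that each handler sets when its request starts - a sketch under that assumption:

import contextvars

from sqlalchemy.orm import scoped_session, sessionmaker

# Each handler sets this at the start of a request, e.g. in prepare():
#     _current_request.set(self)
_current_request = contextvars.ContextVar("current_request")

def get_current_tornado_request():
    # scoped_session only needs a hashable, per-request-unique key
    return _current_request.get()

Session = scoped_session(sessionmaker(), scopefunc=get_current_tornado_request)

With this scheme, on_finish() should also call Session.remove() so each request's session is discarded.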
(This is a 2017 answer to a 2011 question.) As @Stefano Borini pointed out, the easiest way in Tornado 4 is to just let the RequestHandler implicitly pass the session around. Tornado will track the handler instance's state when using coroutine decorator patterns:
import logging

_logger = logging.getLogger(__name__)

from sqlalchemy import create_engine, exc as sqla_exc
from sqlalchemy.orm import sessionmaker, exc as orm_exc

from tornado import gen
from tornado.web import RequestHandler

from my_models import SQLA_Class

Session = sessionmaker(bind=create_engine(...))

class BaseHandler(RequestHandler):
    @gen.coroutine
    def prepare(self):
        self.db_session = Session()

    def on_finish(self):
        self.db_session.close()

class MyHandler(BaseHandler):
    @gen.coroutine
    def post(self):
        SQLA_Object = self.db_session.query(SQLA_Class)...
        SQLA_Object.attribute = ...
        try:
            self.db_session.commit()
        except sqla_exc.SQLAlchemyError:
            _logger.exception("Couldn't commit")
            self.db_session.rollback()
If you really really need to asynchronously reference a SQL Alchemy session inside a declarative_base (which I would consider an anti-pattern since it over-couples the model to the application), Amit Matani has a non-working example here.
