I'm developing a web app with Flask-SQLAlchemy backed by a SQLite database. I need to call a method (create_collation) right after connecting. Without the SQLAlchemy framework, I can do that like this:
conn = sqlite3.connect(path)
conn.create_collation('my_collate', my_collate)
# ... go on and do fancy "order_by" stuff.
How do I do that in Flask-SQLAlchemy? Based on the API I was thinking of the following, but I get AttributeError: 'Engine' object has no attribute 'create_collation'.
from flask_sqlalchemy import SQLAlchemy

class MySQLAlchemy(SQLAlchemy):
    def create_engine(self, sa_url, engine_opts):
        engine = super().create_engine(sa_url, engine_opts)
        engine.create_collation('my_collate', self.my_collate)
        return engine

    @staticmethod
    def my_collate(string1, string2):
        return string1.locateCompare(string2)
Following the SQLAlchemy docs I think I need to get the connection rather than the engine, but I can't figure out how.
Also, where should this go specifically in Flask-SQLAlchemy? What part ultimately "connect"s, and how do I tune into that?
SQLAlchemy has an Events API that allows you to create a function that will be called whenever the connection pool creates a new connection:
from sqlalchemy.event import listens_for
from sqlalchemy.pool import Pool
@listens_for(Pool, "connect")
def my_on_connect(dbapi_con, connection_record):
    dbapi_con.create_collation('my_collate', my_collate)
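If you want the collation registered only for the Flask-SQLAlchemy engine rather than for every pool in the process, the same "connect" event can be attached to the engine itself. A minimal sketch, assuming a standard db = SQLAlchemy(app) setup (the collation function here is just a placeholder, not the asker's actual my_collate):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.event import listens_for

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
db = SQLAlchemy(app)

def my_collate(string1, string2):
    # placeholder collation: ordinary Python string comparison
    return (string1 > string2) - (string1 < string2)

with app.app_context():
    # db.engine is created lazily; registering inside an app context
    # attaches the listener to this app's engine only
    @listens_for(db.engine, "connect")
    def register_collation(dbapi_con, connection_record):
        dbapi_con.create_collation('my_collate', my_collate)
After that, every connection handed out by this engine's pool has the collation available for "order_by" queries.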
I am developing a fastapi server using sqlalchemy and asyncpg to work with a postgres database. For each request, a new session is created (via fastapi dependency injection, as in the documentation). I used sqlite+aiosqlite before postgres+asyncpg and everything worked perfectly. After I switched from sqlite to postgres, every fastapi request crashed with the error:
sqlalchemy.dialects.postgresql.asyncpg.InterfaceError - cannot perform operation: another operation is in progress
This is how I create the engine and sessions:
from typing import AsyncGenerator
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine

user = os.getenv('PG_USER')
password = os.getenv('PG_PASSWORD')
domain = os.getenv('PG_DOMAIN')
db = os.getenv('PG_DATABASE')

# db_async_url = f'sqlite+aiosqlite:///database.sqlite3'
db_async_url = f'postgresql+asyncpg://{user}:{password}@{domain}/{db}'

async_engine = create_async_engine(
    db_async_url, future=True, echo=True
)

create_async_session = sessionmaker(
    async_engine, class_=AsyncSession, expire_on_commit=False
)

async def get_async_session() -> AsyncGenerator[AsyncSession, None]:
    async with create_async_session() as session:
        yield session
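For context, a route wiring this dependency in might look like the sketch below (the /ping-db endpoint and the SELECT 1 query are made up for illustration):
from fastapi import Depends, FastAPI
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

app = FastAPI()

@app.get('/ping-db')
async def ping_db(session: AsyncSession = Depends(get_async_session)):
    # run a trivial query just to exercise the session
    result = await session.execute(text('SELECT 1'))
    return {'db': result.scalar_one()}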
The error disappeared after adding poolclass=NullPool to create_async_engine, so here's what engine creation looks like now:
from sqlalchemy.pool import NullPool
...
async_engine = create_async_engine(
    db_async_url, future=True, echo=True, poolclass=NullPool
)
I spent more than a day solving this problem, so I hope my answer saves other developers a lot of time. There may well be other solutions, and if so, I will be glad to see them here.
I cannot figure out how to get db.stats in MongoEngine.
I've tried:
db = MongoEngine()
db.stats()
Also
db.Document.objects.stats()
db.Document.stats()
I also tried to execute JS, but nothing works and the documentation is very poor.
db.stats() is a method of the mongo shell. You can try something like this:
from mongoengine.connection import get_connection
con = get_connection()
con.get_database().eval('db.stats()')
con.get_database().eval('db.getCollectionInfos()')
I also advise examining objects with the dir method; it can sometimes be useful:
from pprint import pprint
pprint(dir(con))
MongoEngine is a wrapper for PyMongo, so to get the stats of a mongo database using MongoEngine you can run the 'dbstats' MongoDB command on the database, using the PyMongo command function like this:
from mongoengine import connect
client = connect()
db = client.get_database('your_database_name')
db_stats = db.command('dbstats')
coll_stats = db.command('collstats', 'your_collection_name')
print(db_stats)
print(coll_stats)
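Both commands return plain dictionaries, so individual figures can be pulled out directly; for example (field names as returned by MongoDB's dbStats and collStats commands):
# dbStats reports sizes in bytes by default
print(db_stats['dataSize'], 'bytes of data in', db_stats['collections'], 'collections')
# collStats includes the document count for the collection
print(coll_stats['count'], 'documents')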
I'm looking to increment the 'views' field by 1 in a document within my collection. I'm using a MongoDB Atlas database with my Flask app. I've included my route here. Any suggestions would be great, thanks.
@app.route('/view_count/<recipe_id>', methods=['POST'])
def view_count(recipe_id):
    mongo.db.recipes.update_one({"_id": ObjectId(recipe_id)}, {"$inc": {'views': 1}})
    return redirect(url_for('view_recipe.html'))
Your queries are correct if you are using pymongo. Maybe the problem is mongo.db.
Example
from bson import ObjectId
from pymongo import MongoClient
# connect to general db
client = MongoClient('mongodb://localhost:27017')
# mongo accepts everything, so the queries below are fine
# note: client.db means a connection to the database called db inside mongo
client.db.recipes.insert_one({'_id': ObjectId(), 'views': 0})
client.db.recipes.find_one({})  # the insertion above worked
client.db.recipes.update_one({}, {'$inc': {'views': 1}})  # there is only one document, so update it
but if you change:
client = MongoClient('mongodb://localhost:27017')
# with
client = MongoClient('mongodb://localhost:27017').db
# everything continues working, but now the path to recipes is db.db.db.recipes
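To check whether the query itself is the issue, one option is to inspect the UpdateResult that update_one returns; a minimal sketch using the question's mongo.db handle and recipe_id:
from bson import ObjectId

result = mongo.db.recipes.update_one(
    {"_id": ObjectId(recipe_id)},
    {"$inc": {"views": 1}},
)
# matched_count is 0 if no document with that _id exists in this database/collection
print(result.matched_count, result.modified_count)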
I have a Flask app that takes parameters from a web form, queries a DB with SQL Alchemy and returns Jinja-generated HTML showing a table with the results. I want to cache the calls to the DB. I looked into Redis (Using redis as an LRU cache for postgres), which led me to http://pythonhosted.org/Flask-Cache/.
Now I am trying to use Redis + Flask-Cache to cache the calls to the DB. Based on the Flask-Cache docs, it seems like I need to set up a custom Redis cache.
class RedisCache(BaseCache):
    def __init__(self, servers, default_timeout=500):
        pass

def redis(app, config, args, kwargs):
    args.append(app.config['REDIS_SERVERS'])
    return RedisCache(*args, **kwargs)
From there I would need to do something like:
# not sure what to put for args or kwargs
cache = redis(app, config={'CACHE_TYPE': 'redis'})
app = Flask(__name__)
cache.init_app(app)
I have two questions:
What do I put for args and kwargs? What do these mean? How do I set up a Redis cache with Flask-Cache?
Once the cache is set up, it seems like I would want to somehow "memoize" the calls to the DB so that if the method gets the same query it returns the cached output. How do I do this? My best guess would be to wrap the SQLAlchemy call in a method that could then be given the memoize decorator. That way, if two identical queries were passed to the method, Flask-Cache would recognize this and return the appropriate cached response. I'm guessing that it would look like this:
@cache.memoize(timeout=50)
def queryDB(q):
    return q.all()
This seems like a fairly common use of Redis + Flask + Flask-Cache + SQLAlchemy, but I am unable to find a complete example to follow. If someone could post one, that would be super helpful, both for me and for others down the line.
You don't need to create a custom RedisCache class. The docs are just showing how you would create new backends that are not available in flask-cache. RedisCache is already available in werkzeug >= 0.7, which you probably already have installed because it is one of the core dependencies of Flask.
This is how I got flask-cache running with the redis backend:
import time
from flask import Flask
from flask_cache import Cache
app = Flask(__name__)
cache = Cache(app, config={'CACHE_TYPE': 'redis'})
@cache.memoize(timeout=60)
def query_db():
    time.sleep(5)
    return "Results from DB"

@app.route('/')
def index():
    return query_db()
app.run(debug=True)
The reason you're getting "ImportError: redis is not a valid FlaskCache backend" is probably that you don't have redis (the Python library) installed, which you can fix with:
pip install redis
Your redis args would look something like this:
cache = Cache(app, config={
    'CACHE_TYPE': 'redis',
    'CACHE_KEY_PREFIX': 'fcache',
    'CACHE_REDIS_HOST': 'localhost',
    'CACHE_REDIS_PORT': '6379',
    'CACHE_REDIS_URL': 'redis://localhost:6379'
})
Putting @cache.memoize over a method that grabs the info from the DB should work.
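For the SQLAlchemy side, a minimal sketch of that idea (the get_users_by_country function and the User model are made up for illustration, and a Flask-SQLAlchemy db session is assumed; memoize keys the cache on the function's arguments, and delete_memoized clears stale entries after a write):
@cache.memoize(timeout=300)
def get_users_by_country(country):
    # identical `country` values are served from the cache instead of the DB
    return User.query.filter_by(country=country).all()

def add_user(user):
    db.session.add(user)
    db.session.commit()
    # invalidate the cached query results for that country
    cache.delete_memoized(get_users_by_country, user.country)
One caveat: the memoized results are pickled, so ORM instances come back detached from the session; returning plain data (dicts or tuples) from the memoized function is often simpler.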
Using tornado, I want to create a bit of middleware magic that ensures that my SQLAlchemy sessions get properly closed/cleaned up so that objects aren't shared from one request to the next. The trick is that, since some of my tornado handlers are asynchronous, I can't just share one session for each request.
So I am left trying to create a ScopedSession that knows how to create a new session for each request. All I need to do is define a scopefunc for my code that can turn the currently executing request into a unique key of some sort. However, I can't seem to figure out how to get the current request at any one point in time (outside of the scope of the current RequestHandler, which my function doesn't have access to either).
Is there something I can do to make this work?
You might want to associate the Session with the request itself (i.e. don't use scoped_session if it's not convenient). Then you can just say request.session. You still need hooks at the start/end of the request for setup/teardown.
edit: custom scoping function
from sqlalchemy.orm import scoped_session, sessionmaker

def get_current_tornado_request():
    # TODO: ask on the Tornado mailing list how
    # to acquire the request currently being invoked
    raise NotImplementedError

Session = scoped_session(sessionmaker(), scopefunc=get_current_tornado_request)
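Whatever that scopefunc ends up returning, the scoped registry still has to be cleared when each request finishes, or sessions will leak across requests that share a scope key. A minimal sketch, assuming a BaseHandler of your own:
from tornado.web import RequestHandler

class BaseHandler(RequestHandler):
    def on_finish(self):
        # discard the request-scoped session so the next request
        # reusing this scope key gets a fresh one
        Session.remove()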
(This is a 2017 answer to a 2011 question.) As @Stefano Borini pointed out, the easiest way in Tornado 4 is to just let the RequestHandler implicitly pass the session around. Tornado will track the handler instance state when using the coroutine decorator patterns:
import logging
_logger = logging.getLogger(__name__)
from sqlalchemy import create_engine, exc as sqla_exc
from sqlalchemy.orm import sessionmaker, exc as orm_exc
from tornado import gen
from tornado.web import RequestHandler
from my_models import SQLA_Class
Session = sessionmaker(bind=create_engine(...))
class BaseHandler(RequestHandler):
    @gen.coroutine
    def prepare(self):
        self.db_session = Session()

    def on_finish(self):
        self.db_session.close()

class MyHandler(BaseHandler):
    @gen.coroutine
    def post(self):
        SQLA_Object = self.db_session.query(SQLA_Class)...
        SQLA_Object.attribute = ...
        try:
            self.db_session.commit()
        except sqla_exc.SQLAlchemyError:
            _logger.exception("Couldn't commit")
            self.db_session.rollback()
If you really really need to asynchronously reference a SQL Alchemy session inside a declarative_base (which I would consider an anti-pattern since it over-couples the model to the application), Amit Matani has a non-working example here.