I have a FastAPI Python application with routes that operate on a MongoDB instance.
The connection works fine, and I can query documents for my GET endpoints, but creating a new document from within FastAPI seems impossible.
I consistently get:
You have not defined a default connection
I have a standalone script that handles some data migration tasks and it uses the exact same DB class and Document models that the FastAPI app does, and that script is able to save documents to mongo perfectly fine. There is no difference in how the DB object is instantiated between the API and the script.
The DB class:
from os import getenv

from mongoengine import connect
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError


class Mongo:
    @property
    def target_db(self):
        return 'some_db'

    @property
    def uri(self) -> str:
        env_uri = getenv('MONGODB', None)
        if env_uri is None:
            # DBError is a custom exception defined elsewhere in the app
            raise DBError('MONGODB environment variable missing')
        return env_uri.strip()

    def connect(self) -> MongoClient:
        try:
            return connect(host=self.uri, db=self.target_db, alias=self.target_db)
        except ServerSelectionTimeoutError as e:
            raise ServerSelectionTimeoutError(e)
All of my DB models have meta attributes defining exactly what DB and collection to use:
class Thing(Document):
    meta = {'db_alias': 'some_db',
            'collection': 'things'}
Queries on existing documents succeed inside of a route definition:
results = Thing.objects.filter(**query)
# This returns things that I can iterate over
Document creation fails inside of a route definition:
new_thing = Thing(**creation_args)
new_thing.save()
Error:
mongoengine.connection.ConnectionFailure: You have not defined a default connection
What does that even mean? I know that I'm connected because I can query the db.
How is it possible that I can successfully query documents from Mongo but not save them?
Every suggestion I have seen online points to a missing db or alias argument in the call to mongoengine.connect, but I clearly pass both in my Mongo class, and even if that were the problem, surely I wouldn't be able to retrieve documents from the collection at all...
The mongoengine Document models had malformatted meta attributes...
class Accounts(Document):
    meta = {'db_alias': 'some_db',
            'collection': 'things'}
Solution: db_alias needed to be changed to db.
I didn't find this through documentation, and definitely not through the extremely unhelpful error messages. I just tried it on a whim. Now everything works using the FastAPI framework.
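For reference, here is the corrected meta from the Thing example above; the only change is the key name:

class Thing(Document):
    meta = {'db': 'some_db',
            'collection': 'things'}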
I'm developing an API with Flask and I cannot retrieve queries from a MySQL database I've connected with flask-sqlalchemy (not sqlalchemy alone). This is a pre-existing database downloaded from my client's phpMyAdmin, so I haven't run db.create_all(): I simply created the connection string in config.py, then instantiated db = SQLAlchemy() and initialized it (db.init_app(app)) in my factory function (I'm using the factory pattern together with blueprints).
I've already checked and my computer is running the mysql process, the login credentials provided are correct and the database exists in my computer. I'm using MariaDB because I run Manjaro Linux.
This is the connection string, located in config.py:
SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or "mariadb+mariadbconnector://dev:dev@localhost/desayunos56"
This is the relevant model. It was created using flask-sqlacodegen and then modified by me to use only the relevant columns within the table. In models.py:
# coding: utf-8
from flask_sqlalchemy import SQLAlchemy
from app import db

# post_id: Order ID
# meta_key: Type of value (client name, billing address)
# meta_value: Value of meta_key (name or address itself)
t_aus_postmeta = db.Table(
    'aus_postmeta',
    # db.Column('meta_id', db.BigInteger, nullable=False),
    db.Column('post_id', db.BigInteger, nullable=False, server_default=db.FetchedValue()),
    db.Column('meta_key', db.String(255, 'utf8mb4_unicode_ci')),
    db.Column('meta_value', db.String(collation='utf8mb4_unicode_ci'))
)
And finally, this is the file with the error, views.py. It's a blueprint already registered to __init__.py. I created it only with the intention of checking if I could run queries, but I don't really intend to render anything from Flask:
from flask import render_template

from . import main
from .. import db
from app.models import t_aus_postmeta


@main.route("/", methods=["GET"])
def index():
    result = t_aus_postmeta.query_by(post_id=786).first()
This is the error I get: AttributeError: 'Table' object has no attribute 'query_by'
I think it's noteworthy that, although my linter doesn't complain about unresolved imports, when I use t_aus_postmeta I don't get any method suggestions.
All the questions I've checked are based on using sqlalchemy instead of flask-sqlalchemy. What could be causing this error? At this point, I'm at a loss.
I don't think that's the right way to create your model. Instead, you should create it as a class that inherits from db.Model; that's what provides the query attribute you're looking for.
models.py
class t_aus_postmeta(db.Model):
    """
    post_id: Order ID
    meta_key: Type of value (client name, billing address)
    meta_value: Value of meta_key (Name or address itself)
    """
    __tablename__ = 'aus_postmeta'

    post_id = db.Column(db.BigInteger(), nullable=False, server_default=db.FetchedValue())
    # rest of your columns...
If you do it this way a valid query would look like this:
t_aus_postmeta.query.filter_by(post_id=786).first()
Notice that this includes tutiplain's suggestion. I think you got your method name wrong. It's just query followed by a filter_by!
I can't find the API reference for the "query_by" method you are using. It seems there is no such method. Perhaps you meant "filter_by" instead?
I have an application running in production that I've built for a single client that I want to convert to support multiple "tenants".
Currently I am using a Postgres database where all my data resides in a single database under the default public schema. I would like to isolate each tenant in a separate Postgres schema. Ideally, my application's UI would call my API using the tenant's subdomain, and in before_request I would somehow set all database queries for the current request context to query only that tenant's schema. Is this possible?
I envisage an ideal solution to be something similar to this contrived example:
from flask import Flask, request, jsonify, make_response
from pony.orm import Database, Required

app = Flask(__name__)
db = Database(**{<db_connection_dict>})


class User(db.Entity):
    email = Required(str)
    password = Required(str)

    @classmethod
    def login(cls, email: str, password: str) -> str:
        user = cls.get(lambda u: u.email.lower() == email.lower())
        if not user:
            return None
        password_is_valid = <method_to_check_hashed_password>
        if not password_is_valid:
            return None
        return <method_to_generate_jwt>


db.generate_mapping()


@app.before_request
def set_tenant():
    tenant_subdomain = request.host.split(".")[0]
    # MISSING STEP: set_schema is a fictitious method; does something similar to this exist?
    db.set_schema(schema=tenant_subdomain)


@app.route("/auth/login", methods=["POST"])
def login_route():
    data = request.get_json()
    jwt = User.login(data["email"], data["password"])
    if not jwt:
        return make_response({}, 403)
    return make_response(jsonify(data=jwt), 200)
I've come across an interesting/simple example using SQLAlchemy. If not possible with PonyORM I may consider porting my models over to SQLAlchemy but would miss the simplicity of Pony :(
I thought about possibly using the Database.on_connect method to do something like the following, but I'm not sure whether this would even work properly in production; does anyone have any other ideas? I suspect it wouldn't, because if I had two separate tenants querying the database, they would overwrite each other's search path.
@db.on_connect()
def set_request_context_tenant_schema(db, connection) -> None:
    subdomain = request.host.split(".")[0]
    cursor = connection.cursor()
    cursor.execute(f"SET search_path TO {subdomain}, public;")
I'm using a Flask setup I've had for a while, and I'm now trying to install the Flask-Blogging module on it. Current modules:
- Flask-sqlalchemy with postgres
- Flask-login
- Flask-Blogging (new)
My application.py looks like this:
from flask import Flask
from flask import session
from flask.ext.blogging import SQLAStorage, BloggingEngine
from flask.ext.login import LoginManager
from flask.ext.sqlalchemy import SQLAlchemy
'''
The main application setup. The order of things is important
in this file.
'''
app = Flask(__name__)
app.config.from_object('config.base')
app.config.from_envvar('APP_CONFIG_FILE')
'''
Initialize database
'''
db = SQLAlchemy(app)
'''
Initialize blogger
'''
storage = SQLAStorage(db=db)
blog_engine = BloggingEngine(app, storage)
The last two lines are the only new things I added (other than the imports). Suddenly I'm getting an error about duplicate table names:
sqlalchemy.exc.InvalidRequestError: Table 'customer' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
Any ideas what I'm doing wrong? I couldn't find much documentation about Flask-Blogging other than:
http://flask-blogging.readthedocs.org/en/latest/
You get this error because in SQLAStorage.__init__ there is this line:
self._metadata.reflect(bind=self._engine)
This will inspect your database and create SQLAlchemy Table objects for every table that already exists in it.
Thus if your database contains a table called 'customer' the line in your code:
storage = SQLAStorage(db=db)
will automatically model a SQLAlchemy table called 'customer' for you.
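If you want to see this reflection in action, here is a minimal sketch (reusing the db object from above):

from sqlalchemy import MetaData

metadata = MetaData()
metadata.reflect(bind=db.engine)  # builds a Table object for every table already in the database
print(metadata.tables.keys())     # will include 'customer' if that table exists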
Now... no doubt you have your own database model definitions somewhere, probably in another python module, something like:
class Customer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    ...
Since this class definition defines a table called 'customer' and since SQLAStorage has already defined a table called 'customer' you get the exception as soon as your class Customer is imported.
Some ways to work around this problem are:
Import your database definition modules before instantiating SQLAStorage
'''
Initialize database
'''
db = SQLAlchemy(app)
import ankit.db_models # import my db models here so SQLAStorage doesn't do it first
'''
Initialize blogger
'''
storage = SQLAStorage(db=db)
blog_engine = BloggingEngine(app, storage)
or
Tell SQLAStorage to use its own metadata
By passing the db param to SQLAStorage.__init__ you are telling it to use your metadata. You can instead just pass the engine parameter and it will create its own metadata.
storage = SQLAStorage(engine=db.engine)
I am trying to separate some of my database logic into its own helper module. This is because I have several routes that perform the same database functions, and I don't want to keep repeating the same code. I'm a bit confused about db session scopes.
From the SQLAlchemy docs:
Some web frameworks include infrastructure to assist in the task of aligning the lifespan of a Session with that of a web request. This includes products such as Flask-SQLAlchemy, for usage in conjunction with the Flask web framework...
I think this means my db session scope is contained within a particular route since I'm using Flask and Flask-SQLAlchemy, so I came up with the following:
init.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
db = SQLAlchemy(app)
routes.py
from init import db
# (app, MyForm, myhelper, and render_template are imported elsewhere)


@app.route('/one')
def one():
    form = MyForm()
    if form.validate_on_submit():
        myhelper.saveStuff1(form.stuff1.data)
        myhelper.saveStuff2(form.stuff2.data)
        db.session.commit()
    return render_template(...)


@app.route('/two')
def two():
    form = MyForm()
    if form.validate_on_submit():
        myhelper.saveStuff1(form.stuff1.data)
        myhelper.saveStuff2(form.stuff2.data)
        myhelper.saveStuff3(form.stuff3.data)
        db.session.commit()
    return render_template(...)
myhelper.py
from init import db
# (Item is imported from wherever the models are defined, e.g. from models import Item)


# Add new Item
def saveStuff1(formdata):
    db.session.add(Item(name=formdata))


# Update Item
def saveStuff2(formdata):
    item = Item.query.filter_by(name=formdata).first()
    item.description = 'default'
    db.session.add(item)


# etc...
Would this be the correct way to structure my helpers? I'm worried that from init import db will cause problems with scoping since it's imported in both files, or that this overall code pattern will cause other problems.
SQLAlchemy's session scope is not related to Python's variable scope. So no, importing db in multiple places as you've shown won't cause problems. Regarding the session scope, Flask-SQLAlchemy takes care of that for you, so you can ignore (or not worry about) the discussion of scope in the SQLAlchemy docs.
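A quick way to convince yourself, as a sketch: Python imports bind a name to the existing object rather than copying it, so both modules share the same db and therefore the same session registry.

from init import db as db_from_routes
import myhelper

# Both names refer to the same SQLAlchemy object; importing it in two modules
# does not create a second session. db.session is a scoped session that
# Flask-SQLAlchemy manages for you per request.
assert myhelper.db is db_from_routes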
I am trying to implement a many-to-many scenario using the peewee Python ORM, and I'd like some unit tests. The peewee tutorial is great, but it assumes that the database is defined at module level and that all models use it. My situation is different: I don't have a source file (a module, from Python's point of view) with tests that I run explicitly; I am using nose, which collects tests from that file and runs them.
How do I use a custom database only for models instantiated in tests (which are being run by nose)? My goal is to use an in-memory database for tests only, to speed up the testing process.
I just pushed a commit today that makes this easier.
The fix is in the form of a context manager which allows you to override the database of a model:
from unittest import TestCase

from playhouse.test_utils import test_database
from peewee import *

from my_app.models import User, Tweet

test_db = SqliteDatabase(':memory:')


class TestUsersTweets(TestCase):
    def create_test_data(self):
        # ... create a bunch of users and tweets
        for i in range(10):
            User.create(username='user-%d' % i)

    def test_timeline(self):
        with test_database(test_db, (User, Tweet)):
            # This data will be created in `test_db`
            self.create_test_data()

            # Perform assertions on test data inside ctx manager.
            self.assertEqual(Tweet.timeline('user-0') [...])

        # once we exit the context manager, we're back to using the normal database
See the documentation and have a look at the example testcases:
Context manager
Testcases showing how to use
To avoid including the context manager in every test case, override the run method.
# imports and db declaration

class TestUsersTweets(TestCase):
    def run(self, result=None):
        with test_database(test_db, (User, Tweet)):
            super(TestUsersTweets, self).run(result)

    def test_timeline(self):
        self.create_test_data()
        self.assertEqual(Tweet.timeline('user-0') [...])
I took the great answers from @coleifer and @avalanchy one step further.
In order to avoid overriding the run method on every TestCase subclass, you can use a base class... and I also like the idea of not having to write down every model class I work with, so I came up with this:
import unittest
import inspect
import sys
from abc import ABCMeta

import peewee
from playhouse.test_utils import test_database

from business_logic.models import *

test_db = peewee.SqliteDatabase(':memory:')


class TestCaseWithPeewee(unittest.TestCase):
    """
    This abstract class is used to "inject" the test database so that the
    tests don't use the real sqlite db.
    """
    __metaclass__ = ABCMeta

    def run(self, result=None):
        model_classes = [m[1] for m in inspect.getmembers(sys.modules['business_logic.models'], inspect.isclass)
                         if issubclass(m[1], peewee.Model) and m[1] != peewee.Model]
        with test_database(test_db, model_classes):
            super(TestCaseWithPeewee, self).run(result)
So now I can just inherit from TestCaseWithPeewee and don't have to worry about anything other than the test.
Apparently, there's a new approach for the scenario described, where you can bind the models in the setUp() method of your test case:
Example from the official docs:
# tests.py
import unittest

from peewee import SqliteDatabase

from my_app.models import EventLog, Relationship, Tweet, User

MODELS = [User, Tweet, EventLog, Relationship]

# use an in-memory SQLite for tests.
test_db = SqliteDatabase(':memory:')


class BaseTestCase(unittest.TestCase):
    def setUp(self):
        # Bind model classes to test db. Since we have a complete list of
        # all models, we do not need to recursively bind dependencies.
        test_db.bind(MODELS, bind_refs=False, bind_backrefs=False)

        test_db.connect()
        test_db.create_tables(MODELS)

    def tearDown(self):
        # Not strictly necessary since SQLite in-memory databases only live
        # for the duration of the connection, and in the next step we close
        # the connection...but a good practice all the same.
        test_db.drop_tables(MODELS)

        # Close connection to db.
        test_db.close()

        # If we wanted, we could re-bind the models to their original
        # database here. But for tests this is probably not necessary.
When using test_database I encountered problems with test_db not being initialized:
nose.proxy.Exception: Error, database not properly initialized before opening connection
-------------------- >> begin captured logging << --------------------
peewee: DEBUG: ('SELECT "t1"."id", "t1"."name", "t1"."count" FROM "counter" AS t1', [])
--------------------- >> end captured logging << ---------------------
I eventually fixed this by passing create_tables=True like so:
def test_timeline(self):
with test_database(test_db, (User, Tweet), create_tables=True):
# This data will be created in `test_db`
self.create_test_data()
According to the docs create_tables should default to True but it seems that isn't the case in the latest release of peewee.
For anyone who's using pytest, here's how I did it:
conftest.py
import pytest
from peewee import SqliteDatabase

from my_app.models import User, Tweet  # wherever your models live

MODELS = [User, Tweet]  # also add get_through_model() for ManyToMany fields

test_db = SqliteDatabase(':memory:')
test_db.bind(MODELS, bind_refs=False, bind_backrefs=False)
test_db.connect()
test_db.create_tables(MODELS)


@pytest.fixture(autouse=True)
def in_mem_db(mocker):
    mocked_db = mocker.patch("database.db", autospec=True)  # "database.db" is where your app's code imports db from
    mocked_db.return_value = test_db
    return mocked_db
And voilà, all your tests run against an in-memory SQLite database.
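For example, a test can then use the models directly (assuming the same my_app.models import as in conftest.py):

# test_users.py
from my_app.models import User

def test_create_user():
    # The autouse fixture has already patched database.db, so this hits the
    # in-memory SQLite database.
    User.create(username='alice')
    assert User.get(User.username == 'alice').username == 'alice'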