I'm developing an API with Flask and I cannot run queries against a MySQL database I've connected with flask-sqlalchemy (not sqlalchemy alone). This is a pre-existing database downloaded from my client's phpMyAdmin, so I haven't run db.create_all(): I simply created the connection string in config.py, then instantiated db = SQLAlchemy() and initialized it (db.init_app(app)) in my factory function (I'm using the factory pattern together with blueprints).
I've already checked and my computer is running the mysql process, the login credentials provided are correct and the database exists in my computer. I'm using MariaDB because I run Manjaro Linux.
This is the connection string, located in config.py:
SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or "mariadb+mariadbconnector://dev:dev@localhost/desayunos56"
This is the relevant model. It was created using flask-sqlacodegen and then modified by me to only use the relevant columns within the table. At models.py:
from app import db

# post_id:    Order ID
# meta_key:   Type of value (client name, billing address)
# meta_value: Value of meta_key (the name or address itself)
t_aus_postmeta = db.Table(
    'aus_postmeta',
    # db.Column('meta_id', db.BigInteger, nullable=False),
    db.Column('post_id', db.BigInteger, nullable=False, server_default=db.FetchedValue()),
    db.Column('meta_key', db.String(255, 'utf8mb4_unicode_ci')),
    db.Column('meta_value', db.String(collation='utf8mb4_unicode_ci'))
)
And finally, this is the file with the error, views.py. It's a blueprint already registered in __init__.py. I created it only to check whether I could run queries; I don't really intend to render anything from Flask:
from flask import render_template
from . import main
from .. import db
from app.models import t_aus_postmeta

@main.route("/", methods=["GET"])
def index():
    result = t_aus_postmeta.query_by(post_id=786).first()
This is the error I get: AttributeError: 'Table' object has no attribute 'query_by'
I think it's noteworthy that, although my linter doesn't complain about unresolved imports, I don't get any method suggestions when I use t_aus_postmeta.
All the questions I've checked are based on using sqlalchemy instead of flask-sqlalchemy. What could be causing this error? At this point, I'm at a loss.
I don't think that's the right way to create your model. Instead, you should create it as a class that inherits from db.Model; that inheritance is what gives it the query attribute used below.
models.py
class t_aus_postmeta(db.Model):
    """
    post_id: Order ID
    meta_key: Type of value (client name, billing address)
    meta_value: Value of meta_key (name or address itself)
    """
    __tablename__ = 'aus_postmeta'
    # db.Model needs a primary key; meta_id (commented out in your
    # db.Table definition) looks like the natural choice.
    meta_id = db.Column(db.BigInteger, primary_key=True)
    post_id = db.Column(db.BigInteger, nullable=False, server_default=db.FetchedValue())
    # rest of your columns...
If you do it this way a valid query would look like this:
t_aus_postmeta.query.filter_by(post_id=786).first()
Notice that this includes tutiplain's suggestion: you just got the method name wrong. There is no query_by; it's the query attribute followed by a filter_by!
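For completeness, a minimal sketch of the view from the question rewritten against the class-based model (the route, blueprint and values are taken from the question; returning the raw value is just for illustration):

from . import main
from app.models import t_aus_postmeta

@main.route("/", methods=["GET"])
def index():
    row = t_aus_postmeta.query.filter_by(post_id=786).first()
    # row is None when no record matches
    return row.meta_value if row else "not found"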
I can't find the API reference for the "query_by" method you are using. It seems there is no such method. Perhaps you meant "filter_by" instead?
I have a FastAPI Python application with routes that operate on a MongoDB instance.
The connection works fine, and I can query documents for my GET endpoints, but creating a new document from within FastAPI seems impossible.
I consistently get:
You have not defined a default connection
I have a standalone script that handles some data migration tasks; it uses the exact same DB class and Document models as the FastAPI app, and it saves documents to Mongo perfectly fine. There is no difference in how the DB object is instantiated between the API and the script.
The DB class:
from os import getenv
from mongoengine import connect
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError
class Mongo:
    @property
    def target_db(self):
        return 'some_db'

    @property
    def uri(self) -> str:
        env_uri = getenv('MONGODB', None)
        if env_uri is None:
            # DBError is assumed to be defined elsewhere in the project
            raise DBError('MONGODB environment variable missing')
        return env_uri.strip()

    def connect(self) -> MongoClient:
        try:
            return connect(host=self.uri, db=self.target_db, alias=self.target_db)
        except ServerSelectionTimeoutError as e:
            raise ServerSelectionTimeoutError(e)
All of my DB models have meta attributes defining exactly what DB and collection to use:
class Thing(Document):
    meta = {'db_alias': 'some_db',
            'collection': 'things'}
Queries on existing documents succeed inside of a route definition:
results = Thing.objects.filter(**query)
# This returns things that I can iterate over
Document creation fails inside of a route definition:
new_thing = Thing(**creation_args)
new_thing.save()
Error:
mongoengine.connection.ConnectionFailure: You have not defined a default connection
What does that even mean? I know that I'm connected because I can query the db.
How is it possible that I can successfully query documents from Mongo but not save them?
Every suggestion I have seen online points to not having defined a db or alias in the call to mongoengine.connect, but I clearly do in my Mongo object; and even if that were the problem, surely I wouldn't be able to retrieve documents from the collection at all...
The mongoengine Document models had malformed meta attributes...

class Accounts(Document):
    meta = {'db_alias': 'some_db',
            'collection': 'things'}
Solution: db_alias needed to be changed to db.
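In other words, the meta dict that ended up working looks like this:

class Accounts(Document):
    meta = {'db': 'some_db',
            'collection': 'things'}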
I didn't find this through documentation, and definitely not through the extremely unhelpful error messages. I just tried it on a whim. Now everything works using the FastAPI framework.
I have an automap set up to work with an existing database in my app project. When I start it on my PC with the flask run command, it works fine and saves data to the database. Now I'm trying to put my app on a Debian server; when I run the app with gunicorn, it returns this error:
sqlalchemy.exc.ArgumentError: Mapper mapped class Applications->applications could not assemble any primary key columns for mapped table 'applications'.
This is my models.py:
from flask_login import UserMixin
from sqlalchemy import Column, Date, Integer, String, create_engine
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.ext.automap import automap_base

Base = automap_base()

class User(UserMixin, Base, db.Model):
    __tablename__ = "accounts"

    def __repr__(self):
        return "User {}".format(self.acc_username)

    def get_id(self):
        return self.acc_id

class Char(Base, db.Model):
    __tablename__ = "characters"

    def __repr__(self):
        return self.char_name.replace('_', ' ')

class Applications(Base, db.Model):
    __tablename__ = "applications"

    def __repr__(self):
        return "Application {}".format(self.id)

Base.prepare(db.engine, reflect=True)

@login.user_loader
def load_user(id):
    return User.query.get(int(id))
I've already tried specifying the primary key column in the classes, but then it doesn't find the other columns; it's as if it wanted me to write them all out one by one, i.e. as if the automap of the existing database weren't working.
This is the db declaration:
db = SQLAlchemy(app)
Database config:
SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
'sqlite:///' + os.path.join(basedir, 'app.db')
Database URL:
DATABASE_URL=mysql+pymysql://root:xxxx@localhost:3306/xxxx
Without declaring your columns, keys and indexes in the codebase, cases like this can come up when you switch from SQLite to MySQL, or from a local machine to a server (at least in my experience).
In your case I would do one of the following:
Apply better automap functionality as the documentation states (maybe with reflection):
https://docs.sqlalchemy.org/en/13/orm/extensions/automap.html
Reverse engineer the db structure, keys and relations into models, by hand or with a tool like sqlacodegen:
https://pypi.org/project/sqlacodegen/
My preference: go with Flask-Migrate for better convenience and describe all your models, keys, relationships and indexes in the code (see the sketch below):
https://flask-migrate.readthedocs.io/en/latest/
Describing your models fully and driving migrations from the Flask side is the sane way to orchestrate things and get good results every time.
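For illustration, a minimal sketch of that last option, with the model declared explicitly so the mapper never has to guess the primary key (the column name id is an assumption; use whatever the real table defines):

class Applications(db.Model):
    __tablename__ = "applications"
    # An explicit primary key avoids the "could not assemble any
    # primary key columns" error from the question.
    id = db.Column(db.Integer, primary_key=True)
    # ...declare the remaining columns explicitly as well...

    def __repr__(self):
        return "Application {}".format(self.id)

With the columns spelled out, Flask-Migrate's flask db migrate / flask db upgrade can then keep the database schema in sync with the models.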
I wonder how SQLAlchemy tracks changes that are made outside of SQLAlchemy (manual change for example)?
Until now, I've been calling db.session.commit() before reading each value that can be changed outside of SQLAlchemy. Is this bad practice? If so, is there a better way to make sure I get the latest value? I actually wrote the small script below to check, and apparently SQLAlchemy can detect external changes without db.session.commit() being called each time.
Thanks,
P.S: I really want to understand how all the magic behind SQLAlchemy works. Does anyone have a pointer to docs explaining SQLAlchemy's behind-the-scenes work?
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# Use SQLite so this example can be run anywhere.
# The same behaviour is observed on MySQL.
basedir = os.path.abspath(os.path.dirname(__file__))
db_path = os.path.join(basedir, "app.db")
app.config["SQLALCHEMY_DATABASE_URI"] = 'sqlite:///' + db_path

db = SQLAlchemy(app)

# A small class to use in the test
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100))

# Create all the tables and some fake data
db.create_all()
user = User(name="old name")
db.session.add(user)
db.session.commit()

@app.route('/')
def index():
    """The scenario: the first request returns "old name" as expected.
    Then, I modify the name of User:1 to "new name" directly in the database.
    On the next request, "new name" is returned.
    My question is: how does SQLAlchemy know that the value has changed?
    """
    # Before, I always used db.session.commit()
    # to make sure that the latest value is fetched.
    # Without db.session.commit(),
    # SQLAlchemy can still pick up the change made to User.name
    # print("refresh db")
    # db.session.commit()
    u = User.query.filter_by(id=1).first()
    return u.name

app.run(debug=True)
The "cache" of a session is a dict in its identity_map (session.identity_map.dict) that only caches objects for the time of "a single business transaction" , as answered here https://stackoverflow.com/a/5869795.
For different server requests, you have different identity_map. It is not a shared object.
In your scenario, you requested the server 2 separated times. The second time, the identity_map is a new one (you can easily check it by printing out its pointer), and has nothing in cache. Consequently the session will request the database and get you the updated answer. It does not "track change" as you might think.
So, to your question: you don't need to call db.session.commit() before a query unless you have already queried the same object within the same server request.
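A small illustration of that behaviour, using the User model from the question (a sketch; session.expire is the standard way to force a re-load within the same request):

# Within a single request/session:
u1 = User.query.get(1)                    # emits a SELECT, object enters session.identity_map
u2 = User.query.filter_by(id=1).first()   # emits another SELECT, but returns the cached object
assert u1 is u2                           # same Python object; its loaded attributes are not overwritten

db.session.expire(u1)  # mark the object's attributes stale
print(u1.name)         # the next attribute access re-loads fresh values from the database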
Hope it helps.
I've been using this Flask setup for a while and am now trying to install the Flask-Blogging module on it. Current modules:
- Flask-sqlalchemy with postgres
- Flask-login
- Flask-Blogging (new)
My application.py looks like this:
from flask import Flask
from flask import session
from flask.ext.blogging import SQLAStorage, BloggingEngine
from flask.ext.login import LoginManager
from flask.ext.sqlalchemy import SQLAlchemy
'''
The main application setup. The order of things is important
in this file.
'''
app = Flask(__name__)
app.config.from_object('config.base')
app.config.from_envvar('APP_CONFIG_FILE')
'''
Initialize database
'''
db = SQLAlchemy(app)
'''
Initialize blogger
'''
storage = SQLAStorage(db=db)
blog_engine = BloggingEngine(app, storage)
The last two lines are the only new things I added (other than the imports). Suddenly I'm now getting an error about duplicate table names:
sqlalchemy.exc.InvalidRequestError: Table 'customer' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
Any ideas what I'm doing wrong? I couldn't find much documentation about Flask-Blogging other than:
http://flask-blogging.readthedocs.org/en/latest/
You get this error because in SQLAStorage.__init__ there is this line:
self._metadata.reflect(bind=self._engine)
This looks at your database and creates SQLAlchemy Table objects for all the tables currently in your database.
Thus if your database contains a table called 'customer' the line in your code:
storage = SQLAStorage(db=db)
will automatically model an sqlalchemy table called 'customer' for you.
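Roughly speaking (an illustrative sketch, not the actual Flask-Blogging source), the reflection step does something like this:

from sqlalchemy import MetaData

metadata = MetaData()
metadata.reflect(bind=db.engine)
# metadata.tables now has an entry for every existing table,
# including 'customer', before your own model class is ever imported
print(metadata.tables.keys())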
Now... no doubt you have your own database model definitions somewhere, probably in another python module, something like:
class Customer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    ...
Since this class definition defines a table called 'customer' and since SQLAStorage has already defined a table called 'customer' you get the exception as soon as your class Customer is imported.
Some ways to work around this problem are:
Import your database definition modules before instantiating SQLAStorage
'''
Initialize database
'''
db = SQLAlchemy(app)
import ankit.db_models # import my db models here so SQLAStorage doesn't do it first
'''
Initialize blogger
'''
storage = SQLAStorage(db=db)
blog_engine = BloggingEngine(app, storage)
or
Tell SQLAStorage to use its own metadata
By passing the db param to SQLAStorage.__init__ you are telling it to use your metadata. You can instead just pass the engine parameter and it will create its own metadata.
storage = SQLAStorage(engine=db.engine)
Imagine that I have one table in my project with some rows in it.
For example:
# -*- coding: utf-8 -*-
import sqlalchemy as sa
from app import db
class Article(db.Model):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
I'm using Flask-SQLAlchemy, so db.session is a scoped session object.
I saw the versioned-history example at https://github.com/zzzeek/sqlalchemy/blob/master/examples/versioned_history/history_meta.py,
but I can't understand how to use it with my existing tables, or even how to get started. (I get "ArgumentError: Session event listen on a scoped_session requires that its creation callable is associated with the Session class." when I pass db.session to the versioned_session function.)
From versioning I need the following:
1) query for old versions of object
2) query old versions by date range when they changed
3) revert old state to existing object
4) add additional info to history table when version is creating (for example editor user_id, date_edit, remote_ip)
Please tell me what the best practices are for my case and, if you can, add a small working example.
You can work around that error by attaching the event handler to the SignallingSession class[1] instead of the created session object:
import sqlalchemy as sa
from flask.ext.sqlalchemy import SignallingSession
from history_meta import versioned_session, Versioned

# Create your Flask app...

versioned_session(SignallingSession)
db = SQLAlchemy(app)

class Article(Versioned, db.Model):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
The sample code creates parallel tables with a _history suffix and an additional changed datetime column. Querying for old versions is just a matter of looking in that table.
For managing the extra fields, I would put them on your main table; they'll automatically be tracked in the history table.
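For example, a sketch of querying old versions: history_meta attaches the generated history class to the versioned class as __history_mapper__, and the history table carries version and changed columns (article_id, start_date and end_date below are placeholders):

ArticleHistory = Article.__history_mapper__.class_

# 1) all old versions of one article, oldest first
versions = (db.session.query(ArticleHistory)
            .filter(ArticleHistory.id == article_id)
            .order_by(ArticleHistory.version)
            .all())

# 2) versions changed within a date range
in_range = (db.session.query(ArticleHistory)
            .filter(ArticleHistory.id == article_id,
                    ArticleHistory.changed.between(start_date, end_date))
            .all())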
[1] Note, if you override SQLAlchemy.create_session() to use a different session class, you should adjust the class you pass to versioned_session.
I think the problem is you're running into this bug: https://github.com/mitsuhiko/flask-sqlalchemy/issues/182
One workaround would be to stop using flask-sqlalchemy and configure sqlalchemy yourself.
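A rough sketch of that workaround, with SQLAlchemy configured by hand so the versioning listener attaches cleanly (the names here are illustrative):

import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker, scoped_session
from history_meta import versioned_session

engine = sa.create_engine(app.config['SQLALCHEMY_DATABASE_URI'])
Session = sessionmaker(bind=engine)
versioned_session(Session)            # listen on the session factory, not a scoped_session
db_session = scoped_session(Session)  # use db_session.query(...) in place of db.session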