Python SQLAlchemy and SQLite - foreign key [duplicate]

The new version of SQLite has the ability to enforce Foreign Key constraints, but for the sake of backwards-compatibility, you have to turn it on for each database connection separately!
sqlite> PRAGMA foreign_keys = ON;
I am using SQLAlchemy -- how can I make sure this always gets turned on?
What I have tried is this:
engine = sqlalchemy.create_engine('sqlite:///:memory:', echo=True)
engine.execute('pragma foreign_keys=on')
...but it is not working! What am I missing?
EDIT:
I think my real problem is that I have more than one version of SQLite installed, and Python is not using the latest one!
>>> import sqlite3
>>> print sqlite3.sqlite_version
3.3.4
But I just downloaded 3.6.23 and put the exe in my project directory!
How can I figure out which .exe it's using, and change it?

For recent versions (SQLAlchemy ~0.7) the SQLAlchemy homepage says:
PoolListener is deprecated. Please refer to PoolEvents.
Then the example by CarlS becomes:
engine = create_engine(database_url)
def _fk_pragma_on_connect(dbapi_con, con_record):
    dbapi_con.execute('pragma foreign_keys=ON')
from sqlalchemy import event
event.listen(engine, 'connect', _fk_pragma_on_connect)

Building on the answers from conny and shadowmatter, here's code that will check if you are using SQLite3 before emitting the PRAGMA statement:
from sqlalchemy import event
from sqlalchemy.engine import Engine
from sqlite3 import Connection as SQLite3Connection
#event.listens_for(Engine, "connect")
def _set_sqlite_pragma(dbapi_connection, connection_record):
if isinstance(dbapi_connection, SQLite3Connection):
cursor = dbapi_connection.cursor()
cursor.execute("PRAGMA foreign_keys=ON;")
cursor.close()

I now have this working:
Download the latest sqlite and pysqlite2 builds as described above, and make sure the correct versions are being used at runtime by Python:
import sqlite3
import pysqlite2
print sqlite3.sqlite_version # should be 3.6.23.1
print pysqlite2.__path__ # eg C:\\Python26\\lib\\site-packages\\pysqlite2
Next add a PoolListener:
from sqlalchemy.interfaces import PoolListener

class ForeignKeysListener(PoolListener):
    def connect(self, dbapi_con, con_record):
        db_cursor = dbapi_con.execute('pragma foreign_keys=ON')

engine = create_engine(database_url, listeners=[ForeignKeysListener()])
Then be careful how you test whether foreign keys are working: I had some confusion here. When using the sqlalchemy ORM to add() things, my import code was implicitly handling the relation hookups, so it could never fail. Adding nullable=False to some ForeignKey() statements helped me here.
The way I test that sqlite foreign key support is enabled under sqlalchemy is to do a manual insert from a declarative ORM class:
# example
ins = Coverage.__table__.insert().values(id = 99,
                                         description = 'Wrong',
                                         area = 42.0,
                                         wall_id = 99,  # invalid fkey id
                                         type_id = 99)  # invalid fkey id
session.execute(ins)
Here wall_id and type_id are both ForeignKey() columns, and sqlite now correctly throws an exception when trying to hook up invalid fkeys. So it works! If you remove the listener, sqlalchemy will happily add invalid entries.
I believe the main problem may be multiple sqlite3.dll's (or .so) lying around.

As a simpler approach if your session creation is centralised behind a Python helper function (rather than exposing the SQLA engine directly), you can just issue session.execute('pragma foreign_keys=on') before returning the freshly created session.
You only need the pool listener approach if arbitrary parts of your application may create SQLA sessions against the database.
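For illustration, a minimal sketch of that centralised-helper idea (the database URL and the helper name make_session are placeholders, not from the original answer):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///app.db')
SessionFactory = sessionmaker(bind=engine)

def make_session():
    session = SessionFactory()
    # turn on FK enforcement for this connection before handing the session out
    session.execute('pragma foreign_keys=on')
    return session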

From the SQLite dialect page:
SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables, however by default these constraints have no effect on the operation of the table.
Constraint checking on SQLite has three prerequisites:
At least version 3.6.19 of SQLite must be in use
The SQLite library must be compiled without the SQLITE_OMIT_FOREIGN_KEY or SQLITE_OMIT_TRIGGER symbols enabled.
The PRAGMA foreign_keys = ON statement must be emitted on all connections before use.
SQLAlchemy allows for the PRAGMA statement to be emitted automatically for new connections through the usage of events:
from sqlalchemy.engine import Engine
from sqlalchemy import event
#event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
cursor = dbapi_connection.cursor()
cursor.execute("PRAGMA foreign_keys=ON")
cursor.close()

One-liner version of conny's answer:
from sqlalchemy import event
event.listen(engine, 'connect', lambda c, _: c.execute('pragma foreign_keys=on'))

I had the same problem before (scripts with foreign key constraints were going through, but the actual constraints were not enforced by the sqlite engine); I got it solved by:
downloading, building and installing the latest version of sqlite from here: sqlite-sqlite-amalgamation; before this I had sqlite 3.6.16 on my ubuntu machine, which didn't support foreign keys yet; it should be 3.6.19 or higher to have them working.
installing the latest version of pysqlite from here: pysqlite-2.6.0
After that I started getting exceptions whenever a foreign key constraint failed.
Hope this helps, regards.

If you need to execute something for setup on every connection, use a PoolListener.

Enforce Foreign Key constraints for sqlite when using Flask + SQLAlchemy.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# db is created at module level and bound to the app in create_app()
db = SQLAlchemy()

def create_app(config: str = None):
    app = Flask(__name__, instance_relative_config=True)
    if config is None:
        app.config.from_pyfile('dev.py')
    else:
        logger.debug('Using %s as configuration', config)
        app.config.from_pyfile(config)
    db.init_app(app)
    # Ensure FOREIGN KEY for sqlite3
    if 'sqlite' in app.config['SQLALCHEMY_DATABASE_URI']:
        def _fk_pragma_on_connect(dbapi_con, con_record):  # noqa
            dbapi_con.execute('pragma foreign_keys=ON')

        with app.app_context():
            from sqlalchemy import event
            event.listen(db.engine, 'connect', _fk_pragma_on_connect)
Source:
https://gist.github.com/asyd/a7aadcf07a66035ac15d284aef10d458

Related

Dash app in django-plotly-dash breaking migrate, how to stop it executing in a migrate?

I am using django-plotly-dash, so I have a Django template that hosts a Dash app.
(The Dash app is using raw SQL queries over psycopg2)
When I do a django migrate, this dash raw SQL fails because the tables have not yet been created.
This halts the migrate: a catch-22.
Does anyone know if there's a way to stop the Dash apps being executed during a migrate?
Responding to comment from Ian S (thanks)...
Here's a reduced version of the query string:
queryStr = '''
SELECT veh.number as veh,
dh.rcmdh_number as dh,
from
rcm_rcmveh as veh,
rcma_rcmdh as dh,
where sige.sig_id = sig.id
AND sig.unit_id = veh .id
'''
cnx = psycopg2.connect(
    host=POSTGRES_ADDRESS,
    database=POSTGRES_DBNAME,
    user=POSTGRES_USERNAME,
    password=POSTGRES_PASSWORD)
df = pd.read_sql_query(queryStr, cnx)
The query doesn't run because rcm_rcmveh doesn't exist yet because the migration is in the process of being run.
This is in the top level of the dashapp.py module.
So, thinking about it, I probably need to put it inside a function in the module so that it isn't run when the module is scanned by Django on startup?
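A possible sketch of that approach, reusing the connection settings and queryStr from the snippet above (get_dataframe is a hypothetical name). The query then only runs when a callback actually calls the function, not at import time, and therefore not during migrate:
import pandas as pd
import psycopg2

def get_dataframe():
    # connect and query lazily, only when the Dash layout/callback needs data
    cnx = psycopg2.connect(
        host=POSTGRES_ADDRESS,
        database=POSTGRES_DBNAME,
        user=POSTGRES_USERNAME,
        password=POSTGRES_PASSWORD)
    try:
        return pd.read_sql_query(queryStr, cnx)
    finally:
        cnx.close()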

How to enable / define trig functions for Session() of a sqlite database?

I'm very new to programming and am currently developing a flask app based on an sqlite3 db using sqlalchemy.
I have realized that no trigonometric functions are available by default in sqlite.
I have read many guides and SO questions, which propose:
Define the functions myself using C/C++
No such acos function exists
How to make a function for atan. I want to get result manually for SQLLITE
acos function in Sqlite
Load the extensions (created by myself, or from www.sqlite.org)
How to load a SQLite3 extension in SQLAlchemy?
Or alternatively just enable the math functions SQLITE_ENABLE_MATH_FUNCTIONS
Logarithm function in sqlite query?
and here
https://www.sqlite.org/lang_mathfunc.html
Lastly,
I found another one:
https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.create_function
Basically, create_function on the connection.
Example from the source:
import sqlite3
import md5
def md5sum(t):
    return md5.md5(t).hexdigest()
con = sqlite3.connect(":memory:")
con.create_function("md5", 1, md5sum)
cur = con.cursor()
cur.execute("select md5(?)", ("foo",))
print cur.fetchone()[0]
I have absolutely no knowledge of C/C++, so the 4th solution seemed the easiest to implement in a Flask app. However, for my queries I use Session() and pass it sqlalchemy commands. Since I really need at least one working way to use trig functions, my question is: is there a way to pass the connection to the session? (So I define the trig functions for a connection and the session then inherits the connection?)
If that doesn't make sense, I would be very happy to implement one of the other solutions instead.
For instance, the guide from solution two suggests this:
from sqlalchemy.event import listen
# initialization routine
# app: this Flask application
# db: the database, see the question
db_collate = 'sk_SK.UTF-8' # Slovak language for example
def load_extension(dbapi_conn, unused):
    dbapi_conn.enable_load_extension(True)
    dbapi_conn.load_extension('/path/to/libSqliteIcu.so')
    dbapi_conn.enable_load_extension(False)
    dbapi_conn.execute("SELECT icu_load_collation(?, 'ICU_EXT_1')", (db_collate,))

with app.app_context():
    listen(db.engine, 'connect', load_extension)
However, I do not understand whether this is a universal solution, since I have no idea what these lines mean:
db_collate = 'sk_SK.UTF-8'
dbapi_conn.execute("SELECT icu_load_collation(?, 'ICU_EXT_1')", (db_collate,))
In my case I found the file extension-functions.c, which I was hoping to use in my flask web app. (It has all the trig functions written in C; the link is to the official sqlite website.)
In the end I will be happy to do something like this:
query = session.query(Parent).filter(cos(Parent.age) < 0.5).all()
where cos is either defined, preinstalled, or enabled, but still computes the cosine.
Please bear with me; I've only been coding for a month and am lacking the theory, which I am just starting to dig into.
P.S. If there is some essential reading that I cannot move on without, please let me know too!
Thanks!
EDIT 1
Yes, I am trying to get the distance on a sphere in a Flask app using a sqlalchemy Session().
I tried to implement the solution that @IljaEverilä suggested.
This is how my init.py looks now (the db section):
db = SQLAlchemy(app)
engine = create_engine('sqlite:///app.db', echo=False)

@event.listens_for(db.get_engine(), 'connect')
def create_math_functions_on_connect(dbapi_connection, connection_record):
    dbapi_connection.create_function('sin', 1, math.sin)
    dbapi_connection.create_function('cos', 1, math.cos)
    dbapi_connection.create_function('acos', 1, math.acos)
    dbapi_connection.create_function('radians', 1, math.radians)

migrate = Migrate(app, db)
login = LoginManager(app)
login.login_view = 'login'
login.init_app(app)

Session = sessionmaker()
Session.configure(bind=engine)
However, when I make a new session and query the db, I get the error:
>>> from app import Session
>>> session = Session()
>>> x = session.query(Parent).filter(cos(Parent.age) < 0.5).all()
Traceback (most recent call last):
  File "/usr/lib/python3.8/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<console>", line 1, in <module>
NameError: name 'cos' is not defined
Thank you for bearing with me! I figured out one of the ways to make it work!
My settings stayed the same:
db = SQLAlchemy(app)
engine = create_engine('sqlite:///app.db', echo=False)

@event.listens_for(db.get_engine(), 'connect')
def create_math_functions_on_connect(dbapi_connection, connection_record):
    dbapi_connection.create_function('sin', 1, math.sin)
    dbapi_connection.create_function('cos', 1, math.cos)
    dbapi_connection.create_function('acos', 1, math.acos)
    dbapi_connection.create_function('radians', 1, math.radians)

migrate = Migrate(app, db)
login = LoginManager(app)
login.login_view = 'login'
login.init_app(app)

Session = sessionmaker()
Session.configure(bind=engine)
I did not figure out how to pass defined functions to sqlalchemy, but I used raw sql in the filter clause instead and it worked!
from app import Session
from sqlalchemy.sql import text
session = Session()
query = session.query(Parent).filter(text('cos(Parent.age)>0.5')).all()
I'd appreciate it if you can share the way to pass the defined functions to sqlalchemy directly, but otherwise it works!
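One way that should work without raw text, sketched under the assumption that the math functions above are registered on the connection: SQLAlchemy's generic func construct renders func.cos(...) as the SQL call cos(...), which SQLite then resolves against the cos registered via create_function.
from sqlalchemy import func

session = Session()
# func.cos(Parent.age) is emitted as cos(parent.age) in the generated SQL
query = session.query(Parent).filter(func.cos(Parent.age) < 0.5).all()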

SQLAlchemy, scoped_session - raw SQL INSERT doesn't write to DB

I have a Pyramid / SQLAlchemy, MySQL python app.
When I execute a raw SQL INSERT query, nothing gets written to the DB.
When using ORM, however, I can write to the DB. I read the docs, I read up about the ZopeTransactionExtension, read a good deal of SO questions, all to no avail.
What hasn't worked so far:
transaction.commit() - nothing is written to the DB. I do realize this statement is necessary with ZopeTransactionExtension but it just doesn't do the magic here.
dbsession().commit - doesn't work since I'm using ZopeTransactionExtension
dbsession().close() - nothing written
dbsession().flush() - nothing written
mark_changed(session) -
File "/home/dev/.virtualenvs/sc/local/lib/python2.7/site-packages/zope/sqlalchemy/datamanager.py", line 198, in join_transaction
if session.twophase:
AttributeError: 'scoped_session' object has no attribute 'twophase'"
What has worked but is not acceptable because it doesn't use scoped_session:
engine.execute(...)
I'm looking for how to execute raw SQL with a scoped_session (dbsession() in my code)
Here is my SQLAlchemy setup (models/__init__.py)
def dbsession():
    assert (_dbsession is not None)
    return _dbsession

def init_engines(settings, _testing_workarounds=False):
    import zope.sqlalchemy
    extension = zope.sqlalchemy.ZopeTransactionExtension()
    global _dbsession
    _dbsession = scoped_session(
        sessionmaker(
            autoflush=True,
            expire_on_commit=False,
            extension=extension,
        )
    )
    engine = engine_from_config(settings, 'sqlalchemy.')
    _dbsession.configure(bind=engine)
Here is a python script I wrote to isolate the problem. It resembles the real-world environment of where the problem occurs. All I want is to make the below script insert the data into the DB:
# -*- coding: utf-8 -*-
import sys
import transaction
from pyramid.paster import setup_logging, get_appsettings
from sc.models import init_engines, dbsession
from sqlalchemy.sql.expression import text
def __main__():
    if len(sys.argv) < 2:
        raise RuntimeError()
    config_uri = sys.argv[1]
    setup_logging(config_uri)
    aa = init_engines(get_appsettings(config_uri))
    session = dbsession()
    session.execute(text("""INSERT INTO
        operations (description, generated_description)
        VALUES ('hello2', 'world');"""))
    print list(session.execute("""SELECT * from operations""").fetchall())  # prints inserted data
    transaction.commit()
    print list(session.execute("""SELECT * from operations""").fetchall())  # doesn't print inserted data

if __name__ == '__main__':
    __main__()
What is interesting, if I do:
session = dbsession()
session.execute(text("""INSERT INTO
operations (description, generated_description)
VALUES ('hello2', 'world');"""))
op = Operation(generated_description='aa', description='oo')
session.add(op)
then the first print outputs the raw SQL inserted row ('hello2' 'world'), and the second print prints both rows, and in fact both rows are inserted into the DB.
I cannot comprehend why using an ORM insert alongside raw SQL "fixes" it.
I really need to be able to call execute() on a scoped_session to insert data into the DB using raw SQL. Any advice?
It has been a while since I mixed raw sql with sqlalchemy, but whenever you mix them, you need to be aware of what happens behind the scenes with the ORM. First, check the autocommit flag. If the zope transaction is not configured correctly, the ORM insert might be triggering a commit.
Actually, after looking at the zope docs, it seems manual execute statements need an extra step. From their readme:
By default, zope.sqlalchemy puts sessions in an 'active' state when they are
first used. ORM write operations automatically move the session into a
'changed' state. This avoids unnecessary database commits. Sometimes it
is necessary to interact with the database directly through SQL. It is not
possible to guess whether such an operation is a read or a write. Therefore we
must manually mark the session as changed when manual SQL statements write
to the DB.
>>> session = Session()
>>> conn = session.connection()
>>> users = Base.metadata.tables['test_users']
>>> conn.execute(users.update(users.c.name=='bob'), name='ben')
<sqlalchemy.engine...ResultProxy object at ...>
>>> from zope.sqlalchemy import mark_changed
>>> mark_changed(session)
>>> transaction.commit()
>>> session = Session()
>>> str(session.query(User).all()[0].name)
'ben'
>>> transaction.abort()
It seems you aren't doing that, and so the transaction.commit does nothing.
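Applied to your script, the missing step might look like this sketch. Note that mark_changed() expects a Session instance, so with a scoped_session you may need to pass the underlying session (e.g. mark_changed(session())), which is probably why calling it on the scoped_session object raised the 'twophase' AttributeError above.
from zope.sqlalchemy import mark_changed

session = dbsession()
session.execute(text("""INSERT INTO
    operations (description, generated_description)
    VALUES ('hello2', 'world');"""))
# tell zope.sqlalchemy the transaction has pending writes so it will commit them
mark_changed(session)  # or mark_changed(session()) if session is a scoped_session
transaction.commit()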

How to execute raw SQL in Flask-SQLAlchemy app

How do you execute raw SQL in SQLAlchemy?
I have a python web app that runs on flask and interfaces to the database through SQLAlchemy.
I need a way to run the raw SQL. The query involves multiple table joins along with Inline views.
I've tried:
connection = db.session.connection()
connection.execute( <sql here> )
But I keep getting gateway errors.
Have you tried:
result = db.engine.execute("<sql here>")
or:
from sqlalchemy import text
sql = text('select name from penguins')
result = db.engine.execute(sql)
names = [row[0] for row in result]
print names
Note that db.engine.execute() is "connectionless", which is deprecated in SQLAlchemy 2.0.
SQL Alchemy session objects have their own execute method:
result = db.session.execute('SELECT * FROM my_table WHERE my_column = :val', {'val': 5})
All your application queries should be going through a session object, whether they're raw SQL or not. This ensures that the queries are properly managed by a transaction, which allows multiple queries in the same request to be committed or rolled back as a single unit. Going outside the transaction using the engine or the connection puts you at much greater risk of subtle, possibly hard to detect bugs that can leave you with corrupted data. Each request should be associated with only one transaction, and using db.session will ensure this is the case for your application.
Also take note that execute is designed for parameterized queries. Use parameters, like :val in the example, for any inputs to the query to protect yourself from SQL injection attacks. You can provide the value for these parameters by passing a dict as the second argument, where each key is the name of the parameter as it appears in the query. The exact syntax of the parameter itself may be different depending on your database, but all of the major relational databases support them in some form.
Assuming it's a SELECT query, this will return an iterable of RowProxy objects.
You can access individual columns with a variety of techniques:
for r in result:
    print(r[0])  # Access by positional index
    print(r['my_column'])  # Access by column name as a string
    r_dict = dict(r.items())  # convert to dict keyed by column names
Personally, I prefer to convert the results into namedtuples:
from collections import namedtuple
Record = namedtuple('Record', result.keys())
records = [Record(*r) for r in result.fetchall()]
for r in records:
    print(r.my_column)
    print(r)
If you're not using the Flask-SQLAlchemy extension, you can still easily use a session:
import sqlalchemy
from sqlalchemy.orm import sessionmaker, scoped_session
engine = sqlalchemy.create_engine('my connection string')
Session = scoped_session(sessionmaker(bind=engine))
s = Session()
result = s.execute('SELECT * FROM my_table WHERE my_column = :val', {'val': 5})
docs: SQL Expression Language Tutorial - Using Text
example:
from sqlalchemy.sql import text
connection = engine.connect()
# recommended
cmd = 'select * from Employees where EmployeeGroup = :group'
employeeGroup = 'Staff'
employees = connection.execute(text(cmd), group = employeeGroup)
# or - a wee bit more difficult to interpret the command
employeeGroup = 'Staff'
employees = connection.execute(
    text('select * from Employees where EmployeeGroup = :group'),
    group=employeeGroup)
# or - notice the requirement to quote 'Staff'
employees = connection.execute(
    text("select * from Employees where EmployeeGroup = 'Staff'"))
for employee in employees: logger.debug(employee)
# output
(0, 'Tim', 'Gurra', 'Staff', '991-509-9284')
(1, 'Jim', 'Carey', 'Staff', '832-252-1910')
(2, 'Lee', 'Asher', 'Staff', '897-747-1564')
(3, 'Ben', 'Hayes', 'Staff', '584-255-2631')
You can get the results of SELECT SQL queries using from_statement() and text() as shown here. You don't have to deal with tuples this way. As an example for a class User having the table name users you can try,
from sqlalchemy.sql import text
user = session.query(User).from_statement(
text("""SELECT * FROM users where name=:name""")
).params(name="ed").all()
return user
For SQLAlchemy ≥ 1.4
Starting in SQLAlchemy 1.4, connectionless or implicit execution has been deprecated, i.e.
db.engine.execute(...) # DEPRECATED
as well as bare strings as queries.
The new API requires an explicit connection, e.g.
from sqlalchemy import text
with db.engine.connect() as connection:
    result = connection.execute(text("SELECT * FROM ..."))
    for row in result:
        # ...
Similarly, it’s encouraged to use an existing Session if one is available:
result = session.execute(sqlalchemy.text("SELECT * FROM ..."))
or using parameters:
session.execute(sqlalchemy.text("SELECT * FROM a_table WHERE a_column = :val"),
                {'val': 5})
See "Connectionless Execution, Implicit Execution" in the documentation for more details.
result = db.engine.execute(text("<sql here>"))
executes the <sql here> but doesn't commit it unless you're in autocommit mode. So, inserts and updates wouldn't be reflected in the database.
To commit after the changes, do
result = db.engine.execute(text("<sql here>").execution_options(autocommit=True))
This is a simplified answer of how to run SQL query from Flask Shell
First, point Flask at your module (if your module/app is manage.py in the top-level folder and you are on a UNIX operating system), run:
export FLASK_APP=manage
Run Flask shell
flask shell
Import what we need:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app)
from sqlalchemy import text
Run your query:
result = db.engine.execute(text("<sql here>").execution_options(autocommit=True))
This uses the current database connection held by the application.
Flask-SQLAlchemy v: 3.0.x / SQLAlchemy v: 1.4
users = db.session.execute(db.select(User).order_by(User.title.desc()).limit(150)).scalars()
So basically, for the latest stable version of flask-sqlalchemy, the documentation suggests using the session.execute() method in conjunction with db.select(Object).
Have you tried using connection.execute(text( <sql here> ), <bind params here> ) with bind parameters as described in the docs? This can help solve many parameter-formatting and performance problems. Maybe the gateway error is a timeout? Bind parameters tend to make complex queries execute substantially faster.
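A minimal sketch of that suggestion, assuming a Flask-SQLAlchemy db object; the table and column names are placeholders, not from the question:
from sqlalchemy import text

with db.engine.connect() as connection:
    # :val is a bound parameter, filled in from the dict passed to execute()
    result = connection.execute(
        text("SELECT * FROM my_table WHERE my_column = :val"),
        {"val": 5})
    rows = result.fetchall()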
If you want to avoid tuples, another way is by calling the first, one or all methods:
query = db.engine.execute("SELECT * FROM blogs "
"WHERE id = 1 ")
assert query.first().name == "Welcome to my blog"

How to get inserted_primary_key from db.engine.connect().execute call

I'm using:
CPython 2.7.3,
Flask==0.10.1
Flask-SQLAlchemy==0.16
psycopg2==2.5.1
and
postgresql-9.2
I'm trying to get the PK from an insert call with alchemy.
Getting engine like so:
app = Flask(__name__)
app.config.from_envvar('SOME_VAR')
app.wsgi_app = ProxyFix(app.wsgi_app) # Fix for old proxyes
db = SQLAlchemy(app)
And executing insert query in app:
from sqlalchemy import text, exc
def query():
    return db.engine.connect().execute(text('''
        insert into test...'''), kw)
rv = query()
But when I try to access the inserted_primary_key property, I get:
InvalidRequestError: Statement is not an insert() expression construct.
How do I enable implicit_returning in my case? Reading the docs doesn't help.
You can use the RETURNING clause and handle this yourself:
INSERT INTO test (...) VALUES (...) RETURNING id
Then you can retrieve the id as you normally retrieve values from queries.
Note that this works on Postgres, but does not work on other db engines like MySQL or sqlite.
I don't think there is a db agnostic way to do this within SQLAlchemy without using the ORM functionality.
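For illustration, a sketch of the RETURNING approach with a text query, mirroring the call style from the question (PostgreSQL only; the column name is a placeholder for the asker's schema):
from sqlalchemy import text

result = db.engine.connect().execute(
    text("INSERT INTO test (name) VALUES (:name) RETURNING id"),
    name="frank")
# RETURNING makes the INSERT behave like a query, so fetch the new id normally
new_id = result.fetchone()[0]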
Is there any reason you use a text query instead of a normal sqlalchemy insert()? If you're using sqlalchemy, it will probably be much easier for you to rephrase your query as:
from sqlalchemy import text, exc, insert
# in values() you can put a dictionary of key/value pairs:
# the key is the name of the column, the value is the value to insert
con = db.engine.connect()
ins = tablename.insert().values(users="frank")
res = con.execute(ins)
res.inserted_primary_key
[1]
This way sqlalchemy will do the binding for you.
You can use lastrowid
rv = query()
rv.lastrowid
