Python SQLAlchemy PostgreSQL Deprecated API features

I am using the following code to create the function and trigger that keep the created_at and updated_at fields current. After upgrading to a newer module version I am getting the deprecated API warning below.
How can I replace the engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema))) line to remove the warning message?
Code:
mapper_registry.metadata.create_all(engine, checkfirst=True)
create_refresh_updated_at_func = """
CREATE OR REPLACE FUNCTION {schema}.refresh_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
"""
my_schema = "public"
engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema)))
Warning:
RemovedIn20Warning: Deprecated API features detected! These
feature(s) are not compatible with SQLAlchemy 2.0. To prevent
incompatible upgrades prior to updating applications, ensure
requirements files are pinned to "sqlalchemy<2.0". Set environment
variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set
environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this
message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema)))

SQLAlchemy no longer supports autocommit at the library level; you need to run the execute within a transaction.
This should work:
with engine.begin() as conn:
    conn.execute(text(create_refresh_updated_at_func.format(schema=my_schema)))
See the SQLAlchemy 2.0 migration guide, section "migration-core-connection-transaction".
You could also use the driver-level isolation level like this, but I think the connections from this pool will then all be set to autocommit:
engine2 = create_engine(f"postgresql+psycopg2://{username}:{password}@{host}/{db}", isolation_level='AUTOCOMMIT')
with engine2.connect() as conn:
    conn.execute(text(create_refresh_updated_at_func.format(schema=my_schema)))
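SQLAlchemy 1.4/2.0 also supports "commit as you go" on a plain connection; a minimal sketch of the same DDL in that style (same engine and SQL as above):

import sqlalchemy as sa

with engine.connect() as conn:
    conn.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema)))
    conn.commit()  # nothing is committed implicitly in 2.0 style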

Related

Peewee Sqlite Auto Vacuum

I can't set the auto_vacuum option via peewee.
Run this snippet:
from playhouse.sqlite_ext import SqliteExtDatabase, Model, TextField, IntegerField, JSONField

db = SqliteExtDatabase('my_app.db', pragmas=(
    ('cache_size', -1024 * 64),
    ('journal_mode', 'wal'),
    ('auto_vacuum', 1)))

class EventLog(Model):
    key = TextField()
    value = JSONField()

    class Meta:
        database = db

EventLog.create_table()
After that, I connected to the sqlite database:
sqlite3 my_app.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> PRAGMA auto_vacuum;
0
sqlite> PRAGMA journal_mode;
wal
Why doesn't the auto_vacuum value change?
Perhaps this snippet from the SQLite PRAGMA documentation [emphasis added] explains what is happening:
The database connection can be changed between full and incremental autovacuum mode at any time. However, changing from "none" to "full" or "incremental" can only occur when the database is new (no tables have yet been created) or by running the VACUUM command. To change auto-vacuum modes, first use the auto_vacuum pragma to set the new desired mode, then invoke the VACUUM command to reorganize the entire database file. To change from "full" or "incremental" back to "none" always requires running VACUUM even on an empty database.
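In other words, because tables already exist by the time the pragma is applied, the pragma alone is not enough. A minimal sketch, following the quoted docs, that sets the mode and then runs VACUUM through peewee:

from playhouse.sqlite_ext import SqliteExtDatabase

db = SqliteExtDatabase('my_app.db', pragmas=(('auto_vacuum', 1),))
db.connect()              # per-connection pragmas are applied here
db.execute_sql('VACUUM')  # reorganize the file so the new mode takes effect
print(db.execute_sql('PRAGMA auto_vacuum').fetchone())  # expect (1,)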

sqlalchemy looking for server version as string, instead of bytes-like object

I don't know what suddenly caused this (I recently reinstalled Anaconda and all my Python libraries, but I am back to the same versions as before), but when SQLAlchemy tries to connect to the SQL Server it fails: it looks up the server version and tries to run a string operation on it.
The following had no problems prior to my reinstall of packages. I'd connect like so:
sqlalchemy_conn_string = 'mssql+pyodbc://myDSN'
sqlalchemy.create_engine(sqlalchemy_conn_string, module=pypyodbc)
Then it gets all the way to a file called pyodbc.py and fails at this function:
def _get_server_version_info(self, connection):
    try:
        raw = connection.scalar("SELECT SERVERPROPERTY('ProductVersion')")
    except exc.DBAPIError:
        # ...
    else:
        version = []
        r = re.compile(r'[.\-]')
        for n in r.split(raw):  # culprit here
            try:
                version.append(int(n))
            except ValueError:
                version.append(n)
        return tuple(version)
Out[1]: TypeError: cannot use a string pattern on a bytes-like object
That's because at this step, raw is not a string that can be split:
# from PyCharm's debugger window
raw = {bytes}b'13.0.5026.0'
At this point, I don't know if I should submit a bug report for sqlalchemy and/or pypyodbc, or if there's something I can do to fix this myself. But I'd like a solution that doesn't involve editing the code for sqlalchemy on my own machine (like handling the bytes-like object specifically), because we have other team members who will also be downloading vanilla sqlalchemy & pypyodbc and won't have the confidence in editing that source code.
I have confirmed the pypyodbc behaviour under Python 3.6.4.
print(pypyodbc.version) # 1.3.5
sql = """\
SELECT SERVERPROPERTY('ProductVersion')
"""
crsr.execute(sql)
x = crsr.fetchone()[0]
print(repr(x)) # b'12.0.5207.0'
Note that SQLAlchemy's mssql+pyodbc dialect is coded for pyodbc, not pypyodbc, and the two are not guaranteed to be 100% compatible.
The obvious solution would be to use pyodbc instead.
UPDATE:
Check your version of SQLAlchemy. I just looked at the current source code for the mssql+pyodbc dialect and it does
def _get_server_version_info(self, connection):
    try:
        # "Version of the instance of SQL Server, in the form
        # of 'major.minor.build.revision'"
        raw = connection.scalar(
            "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)")
which should avoid the issue, even when using pypyodbc.
If you are using the latest production release of SQLAlchemy (currently version 1.2.15), then you might have better luck with version 1.3.0b1.
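If upgrading is not immediately an option, the same CAST can be exercised at the driver level; a minimal sketch (reusing the DSN from the question) showing that the value comes back as a str rather than bytes:

import pypyodbc

conn = pypyodbc.connect('DSN=myDSN')
crsr = conn.cursor()
crsr.execute("SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)")
print(repr(crsr.fetchone()[0]))  # e.g. '12.0.5207.0' -- a str, not bytes
conn.close()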

run_async_query in Python gcloud BigQuery using Standard SQL instead of Legacy SQL

I need to run an async query using the gcloud Python BigQuery library. Furthermore, I need to run the query using the beta Standard SQL instead of the default Legacy SQL. According to the documentation here, here, and here, I believe I should be able to just set the use_legacy_sql property on the job to False. However, this still results in an error due to the query being processed against Legacy SQL. How do I successfully use this property to indicate which SQL standard I want the query to be processed with?
Example Python code below:
stdz_table = stdz_dataset.table('standardized_table1')
job_name = 'asyncjob-test'
query = """
SELECT TIMESTAMP('2016-03-30 10:32:15', 'America/Chicago') AS special_date
FROM my_dataset.my_table_20160331;
"""
stdz_job = bq_client.run_async_query(job_name, query)
stdz_job.use_legacy_sql = False
stdz_job.allow_large_results = True
stdz_job.create_disposition = 'CREATE_IF_NEEDED'
stdz_job.destination = stdz_table
stdz_job.write_disposition = 'WRITE_TRUNCATE'
stdz_job.begin()

# wait for job to finish
while True:
    stdz_job.reload()
    if stdz_job.state == 'DONE':
        # print use_legacy_sql value, and any errors (will be None if job executed successfully)
        print stdz_job.use_legacy_sql
        print json.dumps(stdz_job.errors)
        break
    time.sleep(1)
This outputs:
False
[{"reason": "invalidQuery", "message": "2.20 - 2.64: Bad number of arguments. Expected 1 arguments.", "location": "query"}]
which is the same error you'd get if you ran it in the BigQuery console using Legacy SQL. When I copy-paste the query into the BigQuery console and run it using Standard SQL, it executes fine. Note: the error location (2.20 - 2.64) might not be exactly correct for the query above, since it is a sample and I have obfuscated some of my personal info in it.
The use_legacy_sql property did not exist as of version 0.17.0, so you'd have needed to check out the current master branch. However, it does now exist as of release 0.18.0, so after upgrading gcloud-python via pip you should be good to go.
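A quick, hypothetical sanity check (assuming the library is distributed under the gcloud package name) to confirm which release is installed:

import pkg_resources

print(pkg_resources.get_distribution('gcloud').version)  # should be >= 0.18.0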

how to set autocommit = 1 in a sqlalchemy.engine.Connection

In sqlalchemy, I make the connection:
conn = engine.connect()
I found this will set autocommit = 0 in my mysqld log.
Now I want to set autocommit = 1 because I do not want to query in a transaction.
Is there a way to do this?
From the SQLAlchemy documentation: Understanding Autocommit
conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
The “autocommit” feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.
Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:
engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))
What dialect are you using for the MySQL connection?
You can set autocommit to true in the connection URL to solve the problem, like this: mysql+mysqldb://user:password@host:port/db?charset=foo&autocommit=true
You can use this:

from sqlalchemy import create_engine
from sqlalchemy.sql import text

engine = create_engine(connection_url)  # a URL such as 'mysql+mysqldb://user:password@host/db'
engine.execute(text(sql).execution_options(autocommit=True))
In case you're configuring SQLAlchemy for a Python application using Flask (via Flask-SQLAlchemy), you can create the engine like this:

# Configure the SQLAlchemy part of the app instance
app.config['SQLALCHEMY_DATABASE_URI'] = conn_url

session_options = {
    'autocommit': True
}

# Create the SQLAlchemy db instance
db = SQLAlchemy(app, session_options=session_options)
I might be a little late here, but for folks using SQLAlchemy >= 2.0.*, the above solutions might not work, as they did not work for me.
So I went through the official documentation, and the solution below worked for me.
from sqlalchemy import create_engine
db_engine = create_engine(database_uri, isolation_level="AUTOCOMMIT")
Above code works if you want to set autocommit engine wide.
But if you want to use autocommit for a particular connection only, then you can use the below:

from sqlalchemy import text

with engine.connect().execution_options(isolation_level="AUTOCOMMIT") as connection:
    with connection.begin():
        connection.execute(text("<statement>"))
Official Documentation
This can be done using the autocommit option in the execution_options() method:

engine.execute(text("UPDATE table SET field1 = 'test'").execution_options(autocommit=True))
This information is available within the documentation on Autocommit

Database change with software update

I've got a program which has a database with a certain schema, v0.1.0.
In my next version (v0.1.1) I've made changes to the database schema.
So when I update to 0.1.1 I would like those changes to take effect without affecting the original data from 0.1.0 and subsequent versions.
How do I go about making the changes without affecting the 0.1.0 database data, and keeping track of those changes in subsequent versions?
I'm using Python with sqlite3.
Update
There is no support for multiple versions of the software; the database is dependent on the version that you are using.
There is no concurrent access to the database: one database per version.
So the user can use the old version, but when they upgrade to the new version the .sqlite schema will be changed.
Track the schema version in the database with the user_version PRAGMA, and keep a sequence of upgrade steps:
def get_schema_version(conn):
    cursor = conn.execute('PRAGMA user_version')
    return cursor.fetchone()[0]

def set_schema_version(conn, version):
    conn.execute('PRAGMA user_version={:d}'.format(version))

def initial_schema(conn):
    # install initial schema
    # ...
    pass

def upgrade_1_2(conn):
    # modify schema, alter data, etc.
    pass

# map target schema version to upgrade step from previous version
upgrade_steps = {
    1: initial_schema,
    2: upgrade_1_2,
}

def check_upgrade(conn):
    current = get_schema_version(conn)
    target = max(upgrade_steps)
    for step in range(current + 1, target + 1):
        if step in upgrade_steps:
            upgrade_steps[step](conn)
            set_schema_version(conn, step)
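A minimal usage sketch (hypothetical file name): open the database and bring it up to date at startup:

import sqlite3

conn = sqlite3.connect('app.db')  # user_version starts at 0 for a new file
check_upgrade(conn)               # runs initial_schema, then each upgrade step in order
conn.commit()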
There are a few ways to do this; I will only mention one.
It seems like you already track the version within the database. On application start, you will want to check this version against the running application's version and run any SQL scripts that perform the schema changes.
Update
An example of this in action:
import os
import sqlite3 as sqlite

def make_movie_table(cursor):
    cursor.execute('CREATE TABLE movies(id INTEGER PRIMARY KEY, title VARCHAR(20), link VARCHAR(20))')

def make_series_table(cursor):
    cursor.execute('CREATE TABLE series(title VARCHAR(30) PRIMARY KEY, series_link VARCHAR(60), number_of_episodes INTEGER, number_of_seasons INTEGER)')

def make_episode_table(cursor):
    cursor.execute('CREATE TABLE episodes(id INTEGER PRIMARY KEY, title VARCHAR(30), episode_name VARCHAR(15), episode_link VARCHAR(40), Date TIMESTAMP, FOREIGN KEY (title) REFERENCES series(title) ON DELETE CASCADE)')

def make_version_table(cursor):
    cursor.execute('CREATE TABLE schema_versions(version VARCHAR(6))')
    cursor.execute('insert into schema_versions(version) values ("0.1.0")')

def create_database(sqlite_file):
    if not os.path.exists(sqlite_file):
        connection = sqlite.connect(sqlite_file)
        cursor = connection.cursor()
        cursor.execute("PRAGMA foreign_keys = ON")
        make_movie_table(cursor)
        make_series_table(cursor)
        make_episode_table(cursor)
        make_version_table(cursor)
        connection.commit()
        connection.close()

def upgrade_database(sqlite_file):
    if os.path.exists(sqlite_file):
        connection = sqlite.connect(sqlite_file)
        cursor = connection.cursor()
        cursor.execute("select max(version) from schema_versions")
        row = cursor.fetchone()
        database_version = row[0]
        print('current version is %s' % database_version)
        if database_version == '0.1.0':
            print('upgrading version to 0.1.1')
            cursor.execute('alter table series ADD COLUMN new_column1 VARCHAR(10)')
            cursor.execute('alter table series ADD COLUMN new_column2 INTEGER')
            cursor.execute('insert into schema_versions(version) values ("0.1.1")')
        #if database_version == '0.1.1':
        #    print('upgrading version to 0.1.2')
        #    # et cetera
        connection.commit()
        connection.close()

# I need to add 2 columns to the series table when the user upgrades the software.
if __name__ == '__main__':
    create_database('/tmp/db.sqlite')
    upgrade_database('/tmp/db.sqlite')
Each upgrade script will take care of making your database changes and then update the version inside the DB to the latest version. Note that we do not use elif statements; this is so that a database can be upgraded across multiple versions if it needs to be.
There are some caveats to watch out for:
Run upgrades inside transactions; you will want to roll back on any errors to avoid leaving the database in an unusable state. Update: this is incorrect, as pointed out. Thanks Martijn!
Avoid renames and column deletes if you can, and if you must, ensure any views using them are updated too.
Long term you're going to be better off using an ORM such as SQLAlchemy with a migration tool like Alembic.
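For instance, a hedged sketch of driving Alembic from Python (assuming a project already initialized with alembic init, so that alembic.ini and the migration scripts exist):

from alembic import command
from alembic.config import Config

cfg = Config('alembic.ini')   # path to the Alembic configuration file
command.upgrade(cfg, 'head')  # apply all pending migrations in order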
