How to specify the PostgreSQL DateStyle property with SQLAlchemy ORM - python

PostgreSQL supports specifying date formats via the DateStyle property, as described here:
http://www.postgresql.org/docs/current/interactive/runtime-config-client.html#GUC-DATESTYLE (the link originally pointed to the 8.3 version of the docs).
I could not find any reference in the SQLAlchemy ORM documentation on how to set this property. Is it possible to do it?

SQLAlchemy makes use of the DBAPI, usually psycopg2, to marshal date values to and from Python datetime objects; you can then format and parse them any way you want using standard Python techniques, so no database-side date formatting features are needed.
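For example (a minimal sketch; the value shown is made up), formatting and parsing happen entirely client-side once the DBAPI hands you a datetime:

from datetime import datetime

# a DATE/TIMESTAMP column comes back from SQLAlchemy as a datetime already
row_date = datetime(2023, 5, 1, 14, 30)

# format it however you like in Python
print(row_date.strftime("%d/%m/%Y %H:%M"))

# and parse strings back the same way
parsed = datetime.strptime("01/05/2023", "%d/%m/%Y")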
If you do want to set this variable, you can just execute PG's SET statement:
conn = engine.connect()
# on SQLAlchemy 1.4+/2.0, wrap raw SQL strings in text() before executing
conn.execute("SET DateStyle='somestring'")
# work with conn
To make this global to all connections:
from sqlalchemy import event
from sqlalchemy.engine import Engine

@event.listens_for(Engine, "connect")
def connect(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("SET DateStyle='somestring'")
    cursor.close()
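Alternatively, a sketch relying on libpq behavior rather than anything SQLAlchemy-specific: psycopg2 passes an options keyword through to libpq, which can set session variables at connect time (the URL below is a placeholder):

from sqlalchemy import create_engine

# "-c" sets any server configuration parameter for the session
engine = create_engine(
    "postgresql+psycopg2://user:pass@localhost/mydb",
    connect_args={"options": "-c DateStyle=ISO,MDY"},
)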

How to generate SQL using pandas without a database connection?

The pandas package has a method called .to_sql that helps insert the current DataFrame into a database.
.to_sql doc:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html
The second parameter is con:
sqlalchemy.engine.(Engine or Connection) or sqlite3.Connection
Is it possible to generate the SQL query without passing a database connection?
We actually cannot print the query without a database connection, but we can use SQLAlchemy's create_mock_engine method and pass an executor that prints each statement instead of executing it, e.g.:
from sqlalchemy import MetaData, create_mock_engine

def dump(sql, *multiparams, **params):
    # the mock engine hands every statement here instead of executing it
    print(sql.compile(dialect=engine.dialect))

engine = create_mock_engine("sqlite://", dump)
MetaData().create_all(engine, checkfirst=False)
frame.to_sql("my_table", engine)  # "my_table" is a placeholder table name
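If all you need is the CREATE TABLE statement (not the INSERTs), pandas can also generate DDL with no connection at all; a minimal sketch, noting that get_schema lives in pandas.io.sql and its exact status has varied across pandas versions:

import pandas as pd

frame = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
# emits a generic CREATE TABLE "my_table" (...) statement
print(pd.io.sql.get_schema(frame, "my_table"))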

How to initialize SQL Alchemy engine, session and table globally

I'm developing a python application where most of its functions will interact (create, read, update and delete) with a specific table in a MySQL database. I know that I can query this specific table with the following code:
engine = create_engine(
    f"mysql+pymysql://{username}:{password}@{host}:{port}",
    pool_pre_ping=True
)
meta = MetaData(engine)
my_table = Table(
    'my_table',
    meta,
    autoload=True,
    schema=db_name
)
dbsession = sessionmaker(bind=engine)
session = dbsession()
# example query to table
results = session.query(my_table).filter(my_table.columns.id >= 1)
results.all()
However, I do not understand how to make these definitions (engine, meta, table, session) global to all of my functions. Should I define these things in my __init__.py and then pass them along as function arguments? Should I define a big class and initialize them in the class __init__?
My goal is to be able to query that table in any of my functions at any time without having to worry if the connection has gone away. According to the SQL Alchemy docs:
Just one time, somewhere in your application’s global scope. It should be looked upon as part of your application’s configuration. If your application has three .py files in a package, you could, for example, place the sessionmaker line in your __init__.py file; from that point on your other modules say “from mypackage import Session”. That way, everyone else just uses Session(), and the configuration of that session is controlled by that central point.
Ok, but what about the engine, table and meta? Do I need to worry about those?
If you are working with a single table, then the reflected table instance (my_table) and the engine should be all you need to expose globally:

- the metadata object (meta) is not required for querying, but is available as my_table.metadata if required
- sessions are not required, because you do not appear to be using the SQLAlchemy ORM
The engine maintains a pool of connections, which you can check out to run queries (don't forget to close them though). This example code uses context managers to ensure that transactions are committed and connections are closed:
from sqlalchemy import select

# Check out a connection
with engine.connect() as conn:
    # Start a transaction
    with conn.begin():
        q = select(my_table).where(my_table.c.id >= 1)
        result = conn.execute(q)
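As a usage note, engine.begin() combines the two context managers above: it checks out a connection and wraps it in a transaction that commits on success and rolls back on error:

with engine.begin() as conn:
    result = conn.execute(select(my_table).where(my_table.c.id >= 1))
    rows = result.fetchall()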

Python - How to connect SQLAlchemy to existing database in memory

I'm creating my DB from an existing shema and it's stored in :memory:.
db = Database(filename=':memory:', schema='schema.sql')
db.recreate()
I now want to "link" this to SQLAlchemy. I have tried different methods but could not get it right.
My current attempt stands as follow:
engine = create_engine('sqlite:///:memory:')
Base = automap_base()
Base.prepare(engine, reflect=True)
User = Base.classes.user
session = Session(engine)
Much like the other things I tried, this throws AttributeError: user.
How can I have this work together?
The relevant part of the documentation is here: https://sqlite.org/inmemorydb.html.
If you use :memory:, then every connection will have its own memory database. The trick is to use a named in-memory database with the URI format, like the following:
import random
import string
import sqlite3

from sqlalchemy import create_engine

# create a random name for the temporary memory DB
sqlite_shared_name = "test_db_{}".format(
    "".join(random.sample(string.ascii_letters, k=4))
)
engine = create_engine(
    "sqlite:///file:{}?mode=memory&cache=shared&uri=true".format(
        sqlite_shared_name
    )
)
- the format is a URI, as stated by the query string parameter uri=true (see the SQLAlchemy documentation)
- it is a memory DB with mode=memory
- it can be shared among various connections with cache=shared
If you have another connection, then you can use more or less the same connection string. For instance, to get a connection to that same in-memory DB using Python's sqlite3 module, you can drop uri=true from the query string (and the dialect part sqlite:///) and pass uri=True as an argument:
dest = sqlite3.connect(
    "file:{}?mode=memory&cache=shared".format(sqlite_shared_name),
    uri=True)
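One caveat worth knowing: a shared in-memory database is destroyed once its last connection closes, so keep at least one connection open for the lifetime of your tests. An alternative sketch, using the StaticPool approach from the SQLAlchemy docs so that every checkout reuses a single connection (and therefore a single memory DB):

from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

engine = create_engine(
    "sqlite://",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # every checkout returns the same connection
)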

Is there a SqlAlchemy database agnostic FROM_UNIXTIME() function?

Currently I have a query similar to the below in Flask-SQLAlchemy:
from sqlalchemy.sql import func

models = (
    Model.query
    .join(ModelTwo)
    .filter(Model.finish_time >= func.from_unixtime(ModelTwo.start_date))
    .all()
)
This works fine with MySQL, which I am running in production; however, when I run tests against the method using an in-memory SQLite database, it fails because from_unixtime is not a SQLite function.
Setting aside the issue of keeping tests as close to production as possible, and the fact that I have two different ways of representing data in the database: is there a database-agnostic method in SQLAlchemy for handling the conversion of dates to Unix timestamps and vice versa?
For anyone else interested in this, I found a way to create custom functions in SQLAlchemy that compile differently depending on the SQL dialect being used. The below achieves what I need:
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles

class convert_timestamp_to_date(expression.FunctionElement):
    name = 'convert_timestamp_to_date'

@compiles(convert_timestamp_to_date)
def mysql_convert_timestamp_to_date(element, compiler, **kwargs):
    return 'from_unixtime({})'.format(compiler.process(element.clauses))

@compiles(convert_timestamp_to_date, 'sqlite')
def sqlite_convert_timestamp_to_date(element, compiler, **kwargs):
    return "datetime({}, 'unixepoch')".format(compiler.process(element.clauses))
The query above can now be re-written as such:
models = (
    Model.query
    .join(ModelTwo)
    .filter(Model.finish_time >= convert_timestamp_to_date(ModelTwo.start_date))
    .all()
)

Can I use Psycopg2's LoggingConnection with SQLAlchemy?

I am using SQLAlchemy 0.9.7 over Postgres with psycopg2 as the driver.
I have a stray transaction that isn't being closed properly, and in order to debug it, I would like to log all of the operations being sent to the database.
The psycopg2.extras.LoggingConnection looks like it provides the functionality I need, but I can't see how I might persuade SQLAlchemy to use this feature of the dialect.
Is this possible via SQLAlchemy?
You could pass a custom connection factory to the SQLAlchemy engine:
import psycopg2.extras
from sqlalchemy import create_engine

def _connection_factory(*args, **kwargs):
    connection = psycopg2.extras.LoggingConnection(*args, **kwargs)
    connection.initialize(open('sql.log', 'a'))
    return connection

db_engine = create_engine(
    conn_string,
    connect_args={"connection_factory": _connection_factory}
)
Alternatively, you could implement a custom cursor class (see psycopg2.extras.LoggingCursor for an example) and pass it in a similar way:
connect_args={ "cursor_factory": MyCursor }
It isn't a direct answer to my own question, but a workaround: similar functionality can be obtained by turning on query logging at the SQLAlchemy layer, rather than the Psycopg2 layer:
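A minimal sketch of that approach, using SQLAlchemy's standard echo flag and logging integration:

import logging

from sqlalchemy import create_engine

# either pass echo=True when creating the engine ...
engine = create_engine(conn_string, echo=True)

# ... or configure the sqlalchemy.engine logger directly
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)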
