Python SQLAlchemy INSERT after DELETE violates constraint

I have this pattern for deleting all rows in a PostgreSQL table and then inserting new ones with SQLAlchemy:
db = create_engine("postgresql://...", echo=False).connect()
metadata = MetaData(db)
my_table = Table('my_table', metadata, autoload_with=db)
...
db.execute(my_table.delete())
db.execute(my_table.insert(), values)
where values is a list.
I can't understand why I get a psycopg2.errors.UniqueViolation when trying to insert.
The data which is inserted is not duplicated, so I guess the problem is that the delete is not committed?
I don't use a Session: what do I need to do to get this simple pattern working correctly?

I found the solution by completely disabling SQLAlchemy's automatic transactions (which are not needed in my case of bulk deletions/insertions) with the DBAPI-supported isolation_level="AUTOCOMMIT":
db = create_engine("postgresql://...", echo=False).connect().execution_options(isolation_level="AUTOCOMMIT")
See https://docs.sqlalchemy.org/en/14/core/connections.html#setting-transaction-isolation-levels-including-dbapi-autocommit
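An alternative, if you do want the delete and insert to stay atomic, is to run both statements in one explicit transaction: within the transaction the INSERT already sees the rows removed by the DELETE, and both commit together. A minimal sketch, reusing my_table and values from the question:
engine = create_engine("postgresql://...", echo=False)
with engine.begin() as conn:  # commits on success, rolls back on error
    conn.execute(my_table.delete())
    conn.execute(my_table.insert(), values)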

Related

SQLAlchemy cannot autoload an mssql temporary table

I'm not able to access temporary tables created on a SQL Server instance using SQLAlchemy.
I connect to the server:
engine = create_engine(URL, poolclass=StaticPool)
I fill a temporary table with data from a pandas dataframe:
df_tmp.to_sql('#table_test', con=engine)
The table exists on the server:
res = engine.execute('SELECT * FROM tempdb..#table_test').fetchall()
print(res)
which returns a list of tuples of my data. But then when I try to make an SQLAlchemy table it fails with a NoSuchTableError:
from sqlalchemy import create_engine, MetaData, Table
metadata = MetaData(engine)
metadata.create_all()
table = Table('#table_test', metadata, autoload=True, autoload_with=engine)
I also tried this, which gives the same error:
table = Table('tempdb..#table_test', metadata, autoload=True, autoload_with=engine)
And I also tried creating a blank table with an SQL command, which gives the same error when I try to read it with SQLAlchemy:
engine.execute('CREATE TABLE #table_test (id_number INT, name TEXT)')
Does SQLAlchemy support temporary tables? If so what is going wrong here? I'd like to have the temporary table as an sqlalchemy.schema.Table object if possible, as then it fits with all my other code.
(re: comments to the question)
Actually, it is a limitation of the current mechanism by which SQLAlchemy's mssql dialect checks for the existence of a table. It queries INFORMATION_SCHEMA.TABLES for the current catalog (database), and #temp tables do not appear in that view. They do appear — after a fashion, and in a not-particularly-helpful way — if we USE tempdb and then query INFORMATION_SCHEMA.TABLES from there.
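To illustrate (a hypothetical snippet, reusing the engine from the question): the temp table is reachable by name, it just never shows up in the view the dialect consults.
obj = engine.execute("SELECT OBJECT_ID('tempdb..#table_test')").fetchall()
print(obj)   # a non-NULL object id: the table really exists
info = engine.execute(
    "SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '#table_test'"
).fetchall()
print(info)  # no rows in the current database, hence NoSuchTableError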
For now, I have created a GitHub issue here to see if we can improve on this.
Update 2020-09-01
The changes for the above GitHub issue have been merged into SQLAlchemy's master branch and will be included in version 1.4. If you want to take advantage of this feature before 1.4 is officially released you can install SQLAlchemy via
pip install git+https://github.com/sqlalchemy/sqlalchemy.git

SQLAlchemy doesn't recognize new entries in query

I'm querying the latest entry from a table like this:
data = dbsession.query(db.mytable).order_by(db.mytable.timestamp.desc()).with_entities(db.mytable.timestamp).first()
On startup this is fine, but if new entries are added by the same dbsession during runtime, the query above doesn't pick them up.
But the following code without SQLAlchemy works as expected:
sql_query="SELECT timestamp FROM mytable ORDER BY timestamp DESC LIMIT 1"
data = cursor.execute(sql_query)
How do I get SQLAlchemy to work in this case?
I had a similar issue once. I don't recall exactly why SQLAlchemy behaves this way, but you need to commit the session before the select to refresh the data:
session.commit()
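For example (a minimal sketch using the session and table from the question): committing ends the session's current transaction, so the next query starts a fresh one and sees rows added in the meantime.
dbsession.commit()  # or dbsession.expire_all() if there is nothing to write
data = (
    dbsession.query(db.mytable)
    .order_by(db.mytable.timestamp.desc())
    .with_entities(db.mytable.timestamp)
    .first()
)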

How to get base from existing sql DDL file?

I'm using SQLAlchemy for MySQL.
The common SQLAlchemy workflow is:
Define model classes mirroring the table structure (class User(Base)).
Create the tables in the database with db.create_all (or alembic, etc.).
Import the model classes and use them (db.session.query(User)).
But what if I want to use raw SQL file instead of defined model classes?
I've read that automap does something similar, but I want to get the mapper objects from a raw SQL file, not from an already-created database.
Is there any best practice to do this?
This is an example of the DDL:
-- ddl.sql
-- This is just an example, so please ignore any grammar issues
CREATE TABLE `card` (
`card_id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'card',
`card_company_id` bigint(20) DEFAULT NULL COMMENT 'card_company_id',
PRIMARY KEY (`card_id`),
KEY `card_ix01` (`card_company_id`),
KEY `card_ix02` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='card table'
And I want to do something like this:
Base = raw_sql_base('ddl.sql') # Some kinda automap_base but from SQL file
# engine pointing at the target MySQL database
engine = create_engine("mysql://user@localhost/program")
# reflect the tables
Base.prepare(engine)
# mapped classes are now created, named after the tables in the SQL file
Card = Base.classes.card
session = Session(engine)
session.add(Card(card_id=1, card_company_id=1))
session.commit() # Insert
SQLAlchemy is not an SQL parser, but the exact opposite; its reflection works against existing databases only. In other words you must execute your DDL and then use reflection / automap to create the necessary Python models:
from sqlalchemy.ext.automap import automap_base
# engine pointing at the target MySQL database
engine = create_engine("mysql://user@localhost/program")
# execute the DDL in order to populate the DB
with open('ddl.sql') as ddl:
    engine.execute(ddl.read())
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# mapped classes are now created, named after the tables in the SQL file
Card = Base.classes.card
session = Session(engine)
session.add(Card(card_id=1, card_company_id=1))
session.commit() # Insert
This of course may fail, if you have already executed the same DDL against your database, so you would have to handle that case as well. Another possible caveat is that some DB-API drivers may not like executing multiple statements at a time, if your ddl.sql happens to contain more than one CREATE TABLE statement etc.
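If the multi-statement case does bite you, one workaround (a sketch, assuming the sqlparse package mentioned below) is to split the script and execute the statements one at a time:
import sqlparse

with open('ddl.sql') as f:
    for statement in sqlparse.split(f.read()):
        if statement.strip():
            engine.execute(statement)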
...but I want to get the mapper objects from a raw SQL file.
Ok, in that case what you need is the aforementioned parser. A cursory search produced two candidates:
sqlparse: Generic, but its issue tracker is a testament to how nontrivial parsing SQL is. It often gets confused; for example, it parses ... COMMENT 'card', `card_company_id` ... as a keyword and an identifier list, instead of a keyword, a literal, punctuation, and an identifier (or better yet, the column definitions as their own nodes).
mysqlparse: A MySQL specific solution, but with limited support for just about anything, and it seems abandoned.
Parsing would be just the first step, though. You'd then have to convert the resulting trees to models.
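As a rough sketch of that first step (again assuming sqlparse), you would get statement token trees and then walk them yourself:
import sqlparse

with open('ddl.sql') as f:
    statements = sqlparse.parse(f.read())
for stmt in statements:
    print(stmt.get_type())  # e.g. 'CREATE'
    print(stmt.tokens)      # the token tree you would have to map to models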

Can I somehow query all the existing tables in peewee / postgres?

I am writing a basic gui for a program which uses Peewee. In the gui, I would like to show all the tables which exist in my database.
Is there any way to get the names of all existing tables, let's say in a list?
Peewee has the ability to introspect Postgres, MySQL and SQLite for the following types of schema information:
Table names
Columns (name, data type, null?, primary key?, table)
Primary keys (column(s))
Foreign keys (column, dest table, dest column, table)
Indexes (name, sql*, columns, unique?, table)
You can get this metadata using the following methods on the Database class:
Database.get_tables()
Database.get_columns()
Database.get_indexes()
Database.get_primary_keys()
Database.get_foreign_keys()
So, instead of using a cursor and writing some SQL yourself, just do:
from peewee import PostgresqlDatabase

db = PostgresqlDatabase('my_db')
tables = db.get_tables()
For even more craziness, check out the reflection module, which can actually generate Peewee model classes from an existing database schema.
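A minimal sketch of that (generate_models lives in the playhouse.reflection module shipped with peewee):
from peewee import PostgresqlDatabase
from playhouse.reflection import generate_models

db = PostgresqlDatabase('my_db')
models = generate_models(db)  # maps table name -> generated model class
print(list(models))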
To get a list of the tables in your schema, make sure that you have established your connection and cursor, then try the following:
cursor.execute("SELECT table_name FROM information_schema.tables WHERE table_schema='public'")
mytables = cursor.fetchall()
mytables = [x[0] for x in mytables]
I hope this helps.

How to get SqlAlchemy Table to read "implicit" schema

Using table creation as normal:
t = Table(name, meta, Column(...), Column(...), ...)
This is the first run where I create the table. In future executions I would like to use the table without having to indicate the [columns]. This seems redundant as it should already be specified in the table schema. In other words, for future accesses, I'd like to simply do:
t = Table(name, meta) # columns already read from schema
Is there a way to do this in SqlAlchemy?
See Reflecting Database Objects in the SA documentation:
t = Table(name, meta, autoload=True, autoload_with=engine)
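A minimal usage sketch (assuming a table named my_table, a hypothetical name, already exists in the database):
from sqlalchemy import create_engine, MetaData, Table

engine = create_engine("postgresql://...")
meta = MetaData()
t = Table('my_table', meta, autoload=True, autoload_with=engine)
print([c.name for c in t.columns])  # column definitions were read from the live schema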
