SQLAlchemy not finding Postgres table connected with postgres_fdw - python

Please excuse any terminology slips; I don't have much experience with databases other than SQLite. I'm trying to replicate what I would do in SQLite, where I could ATTACH a database to a second database and query across all the tables. I wasn't using SQLAlchemy with SQLite.
I'm working with SQLAlchemy 1.0.13, Postgres 9.5 and Python 3.5.2 (using Anaconda) on Win7/64. I have connected two databases (on localhost) using postgres_fdw and imported a few of the tables from the secondary database. I can successfully query the connected table manually with SQL in PgAdminIII and from Python using psycopg2. With SQLAlchemy I've tried:
# Same connection string info that psycopg2 used
engine = create_engine(conn_str, echo=True)

class TestTable(Base):
    __table__ = Table('test_table', Base.metadata,
                      autoload=True, autoload_with=engine)
    # Added this when I got the error the first time
    # test_id is a primary key in the secondary table
    Column('test_id', Integer, primary_key=True)
and get the error:
sqlalchemy.exc.ArgumentError: Mapper Mapper|TestTable|test_table could not
assemble any primary key columns for mapped table 'test_table'
Then I tried:
insp = reflection.Inspector.from_engine(engine)
print(insp.get_table_names())
and the attached tables aren't listed (the tables from the primary database do show up). Is there a way to do what I am trying to accomplish?

In order to map a table, SQLAlchemy needs at least one column denoted as a primary key column. This does not mean the column actually needs to be a primary key column in the eyes of the database, though it is a good idea. Depending on how you've imported the table from your foreign schema, it may not have a representation of a primary key constraint, or any other constraints for that matter. You can work around this either by overriding the reflected primary key column in the Table instance (not in the mapped class's body), or better yet by telling the mapper which columns comprise the candidate key:
engine = create_engine(conn_str, echo=True)

test_table = Table('test_table', Base.metadata,
                   autoload=True, autoload_with=engine)

class TestTable(Base):
    __table__ = test_table
    __mapper_args__ = {
        'primary_key': (test_table.c.test_id,)  # candidate key columns
    }
To inspect foreign table names, use the PGInspector.get_foreign_table_names() method:
print(insp.get_foreign_table_names())
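Note that reflection by name (autoload) still works for foreign tables even though they are missing from get_table_names(); only the inspector method differs. A quick check, reusing conn_str from the question:

from sqlalchemy import create_engine, inspect

engine = create_engine(conn_str)
insp = inspect(engine)                  # a PGInspector for postgresql:// URLs
print(insp.get_table_names())           # local tables only
print(insp.get_foreign_table_names())   # tables imported via postgres_fdw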

Building on the sibling answer by @ilja.
When using the SQLAlchemy automap feature to automatically generate mapped classes and relationships from an existing database schema, I found that the __mapper_args__ solution didn't create the model.
This alternative method, where you manually define the primary key, will correctly enable automap to create your model.
from sqlalchemy import Column, create_engine, Text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.schema import Table

Base = automap_base()
engine = create_engine(conn_str, convert_unicode=True)

pk = Column('uid', Text, primary_key=True)
test_table = Table(
    'test_table', Base.metadata, pk, autoload=True, autoload_with=engine
)

# Inspect postgres schema
Base.prepare(engine, reflect=True)
print(dict(Base.classes))
print(test_table)
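If prepare() succeeds, the generated model is available through Base.classes and can be queried like any mapped class; a short usage sketch (the attribute name follows the table name, and the Session import is an addition):

from sqlalchemy.orm import Session

TestTable = Base.classes.test_table
session = Session(bind=engine)
print(session.query(TestTable).first())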

Related

Multiple Postgres schema issue when using pandas to_sql with SQL Alchemy core

I am exporting a pandas DataFrame to Postgres using the SQLAlchemy core. Here is the basic script:
engine = db.create_engine(f'postgresql://data-catalogue:{dbpwd}@postgres-data-catalogue-dev/data-catalogue')
metadata = db.MetaData(schema="abn")

eshc_underlyers = db.Table('eshc_underlyers', metadata,
    db.Column('description', db.String),
    db.Column('isin', db.String),
    db.Column('ul_product', db.String, primary_key=True),
    db.Column('reference_product', db.String),
    db.Column('haircut_base', db.String, primary_key=True),
    db.Column('base_cur', db.String),
    db.Column('business_date', db.DateTime, primary_key=True),
    db.Column('account', db.String, primary_key=True),
)

metadata.create_all(engine)

with engine.connect() as conn:
    strUlDf.to_sql(name='eshc_underlyers', con=conn, if_exists='append', index=False)
When this runs it creates both an "abn" schema and a "public" schema, but the public one is not needed. Also, when interrogating the DB, the "abn" schema shows the correct composite key being applied, but the "public" schema has none applied at all. The effect of this is that I can run this same script over and over and it will ignore the constraints and allow duplicates to load into the public schema. At the same time, select * from abn.eshc_underlyers returns nothing. Alternatively, if I remove schema="abn" the public default schema works correctly and constraints are observed, but of course this is not what I need.
Not an expert in python or postgres so feeling my way a little here.
OK, figured this out. You have to pass the schema into BOTH the MetaData() statement AND the to_sql statement. In this case the public schema is not created at all.
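For completeness, a sketch of that fix applied to the script above (names reused from the question; the key change is passing schema="abn" in both places):

metadata = db.MetaData(schema="abn")      # schema passed here...
# ... table definition and create_all as above ...
with engine.connect() as conn:
    strUlDf.to_sql(name='eshc_underlyers', con=conn, schema='abn',
                   if_exists='append', index=False)   # ...and here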

(sqlite3.OperationalError) unknown database "seller"

I'm trying to use a test DB (SQLite) for my tests, but when I use Base.metadata.create_all() to create the same tables as the production database, I get this error: (sqlite3.OperationalError) unknown database "seller".
Conftest.py:
DATABASE_URL = 'sqlite:///testedb.sqlite'

@pytest.fixture(scope="function")
def client() -> Generator:
    config_database(DATABASE_URL)
    with TestClient(app) as c:
        yield c
Database.py:
Base = declarative_base()

def config_database(database_url):
    engine = create_engine(database_url)
    Base.metadata.create_all(bind=engine)
Example of a model I'm using:
class Seller(Base):
    __table__ = Table(
        "seller",
        Base.metadata,
        Column(
            "seller_id",
            Integer,
            primary_key=True,
            index=True,
            nullable=False),
        Column("cnpj", String, nullable=True),
        Column("nickname", String, nullable=False),
        schema="seller")
Some database back-ends like PostgreSQL and MS SQL Server support the notion of a database containing multiple schemas, each of which can contain tables, views, stored procedures, etc. If we are connected to a database named "my_db" then
SELECT * FROM seller.thing
means 'select rows from the table named "thing" in the schema named "seller" in the current database (my_db)'.
Other database back-ends like MySQL and SQLite do not support schemas within a database. Instead, they treat "schema" and "database" as synonyms, so
SELECT * FROM seller.thing
means 'select rows from the table named "thing" in the database named "seller", regardless of the current database'.
Therefore,
from sqlalchemy import create_engine, Column, Integer, Table, MetaData

engine = create_engine("sqlite:///data.db")

thing = Table(
    "thing",
    MetaData(),
    Column("id", Integer, primary_key=True, autoincrement=False),
    schema="seller",
)

engine.echo = True
thing.create(engine)
will fail with the error
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unknown database seller
[SQL:
CREATE TABLE seller.thing (
    id INTEGER NOT NULL,
    PRIMARY KEY (id)
)
]
if the current SQLite database does not have an attached database named "seller".
That might be a bit confusing because in the above example the database "data.db" will be created automatically if it does not exist, but that happens when the code tries to establish a (DBAPI) connection to the database. The same "auto-create" behaviour does not occur when an SQL statement tries to refer to another database.
So, if you want to use a "schema" named "seller" in SQLite then you need to ATTACH it to the current database like so:
from sqlalchemy import create_engine, Column, event, Integer, Table, MetaData

engine = create_engine("sqlite:///data.db")

@event.listens_for(engine, "first_connect")
def schema_attach(dbapi_connection, connection_record):
    dbapi_connection.execute("ATTACH DATABASE 'seller.db' AS seller")

thing = Table(
    "thing",
    MetaData(),
    Column("id", Integer, primary_key=True, autoincrement=False),
    schema="seller",
)

engine.echo = True
thing.create(engine)
(Note that in this case "seller.db" will be automatically created if it does not exist.)
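Once the listener is attached, the "seller" schema behaves like any other; a quick sanity check under the same setup (SQLAlchemy 1.x-style implicit autocommit assumed):

with engine.connect() as conn:
    conn.execute(thing.insert().values(id=1))
    print(conn.execute(thing.select()).fetchall())  # [(1,)]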

Reflecting Oracle DB with SQLAlchemy, doesn't reflect the columns [duplicate]

I'm new to Python and SQLAlchemy, and I've run into a curious problem:
user = Table('users', meta, autoload=True, autoload_with=engine)
then I
print(user.columns)
it works fine; the output is user.ID, user.Name, etc. But then:
Session = sessionmaker(bind=engine)
session = Session()
session.query(user).order_by(user.id)
shows error:
AttributeError: 'Table' object has no attribute 'id'
I changed the "id" to "Name"; it's the same error.
I also tried the filter_by method; same error.
Why does this happen?
You could use:
session.query(user).order_by(user.c.id)
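This works because a Table object exposes its columns through the .c (columns) collection rather than as direct attributes; a quick sketch reusing the names from the question ('Smith' is a placeholder value):

print(user.c.keys())                          # e.g. ['ID', 'Name', ...]
session.query(user).order_by(user.c.ID)       # column access goes through .c
session.query(user).filter(user.c.Name == 'Smith')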
Since this is the top answer when you search for this error, I'll drop my solution here as well.
For my problem, I had to reference the class instead of the table in my model's remote_side,
so it had to be "Transaction.id" instead of transactions.id:
class Transaction(Base):
    __tablename__ = "transactions"
    ...
    offset_transaction = relationship(
        ...
        remote_side="Transaction.id",
    )
    ...
For people stumbling into this after many things have changed, here is a solution that works now.
I'm also new to this, and for me all of the above answers failed to work, but using select instead of query worked. Sorry for the different table and column names, but this came from an actual query.
Using an MSSQL database with an ODBC driver.
from sqlalchemy import select, create_engine, MetaData, Table, URL
from sqlalchemy.orm import Session

connect_url = URL.create(
    'mssql+pyodbc',
    host='server',
    port='port',
    database='database',
    query=dict(driver='SQL Server Native Client 11.0'))

engine = create_engine(connect_url)
connection = engine.connect()
metadata = MetaData()

reqgroups = Table('ReqGroup', metadata, autoload_with=engine)

# Print the column names. This works and shows REQGROUPID as one of the columns
print(reqgroups.columns.keys())

with Session(engine) as session:
    # Filter on column REQGROUPID
    stmt = select(reqgroups).filter_by(REQGROUPID="Tarve_ts")
    ResultSet = session.execute(stmt).fetchone()
    print(ResultSet)

SQLAlchemy: if table does not exist

I wrote a module to create an empty database file:
def create_database():
    engine = create_engine("sqlite:///myexample.db", echo=True)
    metadata = MetaData(engine)
    metadata.create_all()
But in another function, I want to open the myexample.db database and create a table in it if it doesn't already have that table.
E.g. the first of the subsequent tables I would create would be:
Table(Variable_TableName, metadata,
      Column('Id', Integer, primary_key=True, nullable=False),
      Column('Date', Date),
      Column('Volume', Float))
(Since it is initially an empty database, it will have no tables in it, but subsequently I can add more tables to it. That's what I'm trying to say.)
Any suggestions?
I've managed to figure out what I intended to do. I used engine.dialect.has_table(engine, Variable_tableName) to check if the database has the table. If it doesn't, it proceeds to create the table in the database.
Sample code:
engine = create_engine("sqlite:///myexample.db") # Access the DB Engine
if not engine.dialect.has_table(engine, Variable_tableName): # If table don't exist, Create.
metadata = MetaData(engine)
# Create a table with the appropriate Columns
Table(Variable_tableName, metadata,
Column('Id', Integer, primary_key=True, nullable=False),
Column('Date', Date), Column('Country', String),
Column('Brand', String), Column('Price', Float),
# Implement the creation
metadata.create_all()
This seems to be giving me what I'm looking for.
Note that the 'Base.metadata' documentation states about create_all:
Conditional by default, will not attempt to recreate tables already
present in the target database.
And create_all takes these arguments: create_all(self, bind=None, tables=None, checkfirst=True), where checkfirst, according to the documentation:
Defaults to True, don't issue CREATEs for tables already present in
the target database.
So if I understand your question correctly, you can just skip the condition.
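In other words, a bare create_all is already safe to run repeatedly:

metadata.create_all(engine)  # checkfirst=True by default, so existing tables are skipped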
The accepted answer prints a warning that engine.dialect.has_table() is only for internal use and not part of the public API. The message suggests this as an alternative, which works for me:
import os
import sqlalchemy

# Set up a connection to a SQLite3 DB
test_db = os.getcwd() + "/test.sqlite"
db_connection_string = "sqlite:///" + test_db
engine = sqlalchemy.create_engine(db_connection_string)

# The recommended way to check for existence
sqlalchemy.inspect(engine).has_table("BOOKS")
See also the SQLAlchemy docs.
For those who define the tables first in some models.tables file, among other tables:
this is a code snippet for finding the class that represents the table we want to create (so later we can use the same code to just query it).
Together with the if check written above, I still run the code with checkfirst=True:
ORMTable.__table__.create(bind=engine, checkfirst=True)
models.tables:
class TableA(Base):
    ...

class TableB(Base):
    ...

class NewTableC(Base):
    id = Column('id', Text)
    name = Column('name', Text)
Then in the form action file:
engine = create_engine("sqlite:///myexample.db")
if not engine.dialect.has_table(engine, table_name):
# Added to models.tables the new table I needed ( format Table as written above )
table_models = importlib.import_module('models.tables')
# Grab the class that represents the new table
# table_name = 'NewTableC'
ORMTable = getattr(table_models, table_name)
# checkfirst=True to make sure it doesn't exists
ORMTable.__table__.create(bind=engine, checkfirst=True)
engine.dialect.has_table does not work for me with cx_Oracle.
I get AttributeError: 'OracleDialect_cx_oracle' object has no attribute 'default_schema_name'.
I wrote a workaround function:
from sqlalchemy.engine.base import Engine

def orcl_tab_or_view_exists(in_engine: Engine, in_object: str, in_object_name: str) -> bool:
    """Checks if an Oracle table or view exists on the current in_engine connection

    in_object: 'table' | 'view'
    in_object_name: table_name | view_name
    """
    obj_query = """SELECT {o}_name FROM all_{o}s
        WHERE owner = SYS_CONTEXT('userenv', 'current_schema')
        AND {o}_name = '{on}'""".format(o=in_object, on=in_object_name.upper())
    with in_engine.connect() as connection:
        result = connection.execute(obj_query)
        return len(list(result)) > 0
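Called like so, for example (the engine and object names are placeholders):

print(orcl_tab_or_view_exists(engine, 'table', 'my_table'))  # True if MY_TABLE exists
print(orcl_tab_or_view_exists(engine, 'view', 'my_view'))    # True if MY_VIEW exists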
This is the code that works for me to create all tables for all model classes defined with the Base class:
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class YourTable(Base):
    __tablename__ = 'your_table'
    id = Column(Integer, primary_key=True)

DB_URL = "mysql+mysqldb://<user>:<password>@<host>:<port>/<db_name>"
scoped_engine = create_engine(DB_URL)
Base.metadata.create_all(scoped_engine)

SQLAlchemy alembic AmbiguousForeignKeysError for declarative type but not for equivalent non-declarative type

I have the following alembic migration:
revision = '535f7a49839'
down_revision = '46c675c68f4'

from alembic import op
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from datetime import datetime

Session = sessionmaker()
Base = declarative_base()
metadata = sa.MetaData()

# This table definition works
organisations = sa.Table(
    'organisations',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('creator_id', sa.Integer),
    sa.Column('creator_staff_member_id', sa.Integer),
)

"""
# This doesn't...
class organisations(Base):
    __tablename__ = 'organisations'
    id = sa.Column(sa.Integer, primary_key=True)
    creator_id = sa.Column(sa.Integer)
    creator_staff_member_id = sa.Column(sa.Integer)
"""

def upgrade():
    bind = op.get_bind()
    session = Session(bind=bind)
    session._model_changes = {}  # if you are using Flask-SQLAlchemy, this works around a bug
    print(session.query(organisations).all())
    raise Exception("don't succeed")

def downgrade():
    pass
Now the query session.query(organisations).all() works when I use the imperatively-defined table (the one not commented out). But if I use the declarative version, which as far as I understand should be equivalent, I get an error:
sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join
condition between parent/child tables on relationship
StaffMember.organisation - there are multiple foreign key paths
linking the tables. Specify the 'foreign_keys' argument, providing a
list of those columns which should be counted as containing a foreign
key reference to the parent table.
Now I understand what this error means: I have two foreign keys from organisations to staff_members in my actual models. But why does alembic care about these, and how does it even know they exist? How does this migration know that something called StaffMember exists? As far as I understand, alembic should only know about the models you explicitly tell it about in the migration.
It turns out the problem was with the Flask-Script setup I was using to invoke alembic. The command was importing the code that initialises my Flask app, which was itself importing my actual models.
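The general takeaway is to keep migrations self-contained: define lightweight, relationship-free table stubs inside the migration (as the working imperative sa.Table above does) rather than importing application models. A minimal pattern, assuming only the columns the migration actually touches:

organisations = sa.Table(
    'organisations', sa.MetaData(),
    sa.Column('id', sa.Integer, primary_key=True),
)

def upgrade():
    bind = op.get_bind()
    for row in bind.execute(sa.select([organisations.c.id])):
        print(row)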
