I want to create a new table with my Alembic migration file and add 2 records to the table. My upgrade() function:
def upgrade():
    op.create_table(
        'new_table',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('text', sa.String(length=180), nullable=False),
        sa.PrimaryKeyConstraint('id', name='pk_new_table'),
        sa.UniqueConstraint('text', name='uq_new_table__text'),
    )
    # Create ad-hoc table as a helper
    new_table = sa.table(
        'new_table',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('text', sa.String(length=180), nullable=False),
        sa.PrimaryKeyConstraint('id', name='pk_new_table'),
        sa.UniqueConstraint('text', name='uq_new_table__text'),
    )
    op.bulk_insert(new_table, [
        {'text': 'First'},
        {'text': 'Second'},
    ])
When running the upgrade through Alembic I get the following error:
AttributeError: 'PrimaryKeyConstraint' object has no attribute 'key'
The error is raised from the line containing sa.UniqueConstraint('text', name='uq_new_table__text') in the SQLAlchemy table helper. What could be wrong?
The database backend the migration is applied to is MySQL.
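For reference, the usual Alembic pattern is to build the ad-hoc helper from the lightweight table()/column() constructs, which accept plain columns only; the constraints belong on op.create_table() and are never needed by op.bulk_insert(). A minimal sketch:
from sqlalchemy.sql import table, column

# Ad-hoc helper: columns only, no PrimaryKeyConstraint/UniqueConstraint
new_table = table(
    'new_table',
    column('id', sa.Integer),
    column('text', sa.String),
)
op.bulk_insert(new_table, [
    {'text': 'First'},
    {'text': 'Second'},
])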
I am exporting a pandas DataFrame to Postgres using SQLAlchemy Core. Here is the basic script:
engine = db.create_engine(f'postgresql://data-catalogue:{dbpwd}@postgres-data-catalogue-dev/data-catalogue')
metadata = db.MetaData(schema="abn")
eshc_underlyers = db.Table('eshc_underlyers', metadata,
    db.Column('description', db.String),
    db.Column('isin', db.String),
    db.Column('ul_product', db.String, primary_key=True),
    db.Column('reference_product', db.String),
    db.Column('haircut_base', db.String, primary_key=True),
    db.Column('base_cur', db.String),
    db.Column('business_date', db.DateTime, primary_key=True),
    db.Column('account', db.String, primary_key=True),
)
metadata.create_all(engine)
with engine.connect() as conn:
    strUlDf.to_sql(name='eshc_underlyers', con=conn, if_exists='append', index=False)
When this runs it creates both an "abn" schema and a "public" schema, but the public one is not needed. Also, when interrogating the DB, the "abn" schema shows the correct composite key being applied, but the "public" schema has none applied at all. The effect of this is that I can run this same script over and over and it will ignore the constraints and allow duplicates to load into the public schema. At the same time, select * from abn.eshc_underlyers returns nothing. Alternatively, if I remove schema="abn" the public default schema works correctly and constraints are observed, but of course this is not what I need.
Not an expert in Python or Postgres, so feeling my way a little here.
OK, figured this out. You have to pass the schema into BOTH the MetaData() statement AND the to_sql() call. In this case the public schema is not created at all.
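A minimal sketch of that fix with the script above (schema= is pandas' own to_sql parameter):
metadata = db.MetaData(schema="abn")
# ... table definition and metadata.create_all(engine) unchanged ...
with engine.connect() as conn:
    strUlDf.to_sql(name='eshc_underlyers', con=conn, schema='abn',
                   if_exists='append', index=False)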
I'm trying to use a test DB (SQLite) for my tests, but when I use Base.metadata.create_all() to create the same tables as the production database, I get this error: (sqlite3.OperationalError) unknown database "seller".
Conftest.py:
DATABASE_URL = 'sqlite:///testedb.sqlite'

@pytest.fixture(scope="function")
def client() -> Generator:
    config_database(DATABASE_URL)
    with TestClient(app) as c:
        yield c
Database.py:
Base = declarative_base()

def config_database(database_url):
    engine = create_engine(database_url)
    Base.metadata.create_all(bind=engine)
An example of a model I'm using:
class Seller(Base):
    __table__ = Table(
        "seller",
        Base.metadata,
        Column(
            "seller_id",
            Integer,
            primary_key=True,
            index=True,
            nullable=False),
        Column("cnpj", String, nullable=True),
        Column("nickname", String, nullable=False),
        schema="seller")
Some database back-ends, like PostgreSQL and MS SQL Server, support the notion of a database containing multiple schemas, each of which can contain tables, views, stored procedures, etc. If we are connected to a database named "my_db" then
SELECT * FROM seller.thing
means 'select rows from the table named "thing" in the schema named "seller" in the current database (my_db)'.
Other database back-ends like MySQL and SQLite do not support schemas within a database. Instead, they treat "schema" and "database" as synonyms, so
SELECT * FROM seller.thing
means 'select rows from the table named "thing" in the database named "seller", regardless of the current database'.
Therefore,
from sqlalchemy import create_engine, Column, Integer, Table, MetaData

engine = create_engine("sqlite:///data.db")

thing = Table(
    "thing",
    MetaData(),
    Column("id", Integer, primary_key=True, autoincrement=False),
    schema="seller",
)

engine.echo = True
thing.create(engine)
will fail with the error
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unknown database seller
[SQL:
CREATE TABLE seller.thing (
    id INTEGER NOT NULL,
    PRIMARY KEY (id)
)
]
if the current SQLite database does not have an attached database named "seller".
That might be a bit confusing because in the above example the database "data.db" will be created automatically if it does not exist, but that happens when the code tries to establish a (DBAPI) connection to the database. The same "auto-create" behaviour does not occur when an SQL statement tries to refer to another database.
So, if you want to use a "schema" named "seller" in SQLite then you need to ATTACH it to the current database like so:
from sqlalchemy import create_engine, Column, event, Integer, Table, MetaData

engine = create_engine("sqlite:///data.db")

@event.listens_for(engine, "first_connect")
def schema_attach(dbapi_connection, connection_record):
    dbapi_connection.execute("ATTACH DATABASE 'seller.db' AS seller")

thing = Table(
    "thing",
    MetaData(),
    Column("id", Integer, primary_key=True, autoincrement=False),
    schema="seller",
)

engine.echo = True
thing.create(engine)
(Note that in this case "seller.db" will be automatically created if it does not exist.)
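Tying that back to the test setup above, a sketch of config_database() that attaches the extra database before creating the tables; the 'seller_test.db' file name is an assumption, and the "connect" event is used so every new DBAPI connection gets the attachment:
from sqlalchemy import create_engine, event

def config_database(database_url):
    engine = create_engine(database_url)

    @event.listens_for(engine, "connect")
    def schema_attach(dbapi_connection, connection_record):
        # Attach a second database file so "seller.<table>" names resolve
        dbapi_connection.execute("ATTACH DATABASE 'seller_test.db' AS seller")

    Base.metadata.create_all(bind=engine)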
I have a table called Users which has some columns, and I'm trying to add an after_update hook on it (to be able to save the table's history in another table), but it's not working:
Users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("created_at", DateTime, default=func.now()),
    Column("updated_at", DateTime, default=func.now(), onupdate=func.now()),
)

@event.listens_for(Users, "after_update")
def receive_after_update(mapper, connection, target):
    print("updating")
The error I'm getting is:
sqlalchemy.exc.InvalidRequestError: No such event 'after_update' for target 'users'
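For context, after_update is an ORM mapper-level event: it fires when the ORM flushes an UPDATE for a mapped class, so it cannot be attached to a Core Table object. A minimal sketch, assuming SQLAlchemy 1.4+ and that mapping the table to a class is acceptable; the User class name is illustrative:
from sqlalchemy import event
from sqlalchemy.orm import registry

mapper_registry = registry()

class User:
    pass

# Map the plain class onto the existing Core table so mapper events apply
mapper_registry.map_imperatively(User, Users)

@event.listens_for(User, "after_update")
def receive_after_update(mapper, connection, target):
    # Fires after the ORM flushes an UPDATE for a User instance
    print("updating")
Note that mapper events only fire for ORM flushes; rows changed through Core statements such as Users.update() will not trigger the listener.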
I'm trying to connect a foreign (pre-existing) database to a Python Flask app using Flask_SQLAlchemy.
I looked everywhere, including the official Flask_SQLAlchemy docs.
I've been looking all over the internet for the past 4 days for any tutorial that features the Flask_SQLAlchemy library for ORM, without luck.
I kept looking over the SQLAlchemy reflection docs but got confused about what to do next.
Here is my Flask code:
Base = automap_base()
xDB = 'postgres://****'
engine = db.create_engine(xDB)
metadata = db.MetaData()
Session = db.sessionmaker(bind=engine)
session = db.scoped_session(engine)
Base.metadata.reflect(engine)
y = [db.Table(table, metadata, autoload=True, autoload_with=engine)
     for table in engine.table_names()]
I've tried to query in many different ways based on what I've read from many sources; none of them worked with Flask_SQLAlchemy.
Attempt #1:
t = db.select('test1').limit(10)
engine.execute(t).fetchall()
Output:
t = SELECT test1.id, test1.name
FROM test1
LIMIT :param_1
Attempt #2:
t = db.session.query([test1])
Output:
sqlalchemy.exc.InvalidRequestError: SQL expression, column, or mapped entity expected - got '[Table('test1', MetaData(bind=None), Column('id', INTEGER(), table=<test1>, nullable=False, server_default=DefaultClause(, for_update=False)), Column('name', VARCHAR(), table=<test1>, nullable=False), schema=None)]'
I thought it was already mapped, since autoload=True, Base = automap_base(), and Base.metadata.reflect(engine).
Attempt #3:
t = metadata.tables['test1']
Output:
KeyError: Table('test1', MetaData(bind=None), Column('id', INTEGER(), table=<test1>, nullable=False, server_default=DefaultClause(, for_update=False)), Column('name', VARCHAR(), table=<test1>, nullable=False), schema=None)
What I don't understand: that metadata is already defined above as metadata = db.MetaData().
I can't find anything for Flask_SQLAlchemy, old or new, but I can see some resources for SQLAlchemy that don't work with the Flask_SQLAlchemy library. Could someone help?
Don't use Flask_SQLAlchemy; use regular SQLAlchemy instead.
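For example, a minimal sketch with plain SQLAlchemy (1.4+ style; the connection URL is a placeholder and test1 is the table from the question):
from sqlalchemy import create_engine, MetaData, select

engine = create_engine('postgresql://****')
metadata = MetaData()
metadata.reflect(bind=engine)  # load all existing tables from the database

test1 = metadata.tables['test1']  # reflected tables are keyed by table name
with engine.connect() as conn:
    rows = conn.execute(select(test1).limit(10)).fetchall()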
Here is my current code:
def init_model(engine):
    global t_user
    t_user = sa.Table("User", meta.metadata,
        sa.Column("id", sa.types.Integer, primary_key=True),
        sa.Column("name", sa.types.String(100), nullable=False),
        sa.Column("first_name", sa.types.String(100), nullable=False),
        sa.Column("last_name", sa.types.String(100), nullable=False),
        sa.Column("email", sa.types.String(100), nullable=False),
        sa.Column("password", sa.types.String, nullable=False),
        autoload=True,
        autoload_with=engine
    )
    orm.mapper(User, t_user)
    meta.Session.configure(bind=engine)
    meta.Session = orm.scoped_session(sm)
    meta.engine = engine
I then try to execute:
>>> meta.metadata.create_all(bind=meta.engine)
And receive the error:
raise exc.UnboundExecutionError(msg)
sqlalchemy.exc.UnboundExecutionError: The MetaData is not bound to an Engine or Connection. Execution can not proceed without a database to execute against. Either execute with an explicit connection or assign the MetaData's .bind to enable implicit execution.
In my development.ini I have:
# SQLAlchemy database URL
sqlalchemy.url = sqlite:///%(here)s/development.db
I'm new to Python's Pylons and have no idea how to resolve this message. This is probably an easy fix to the trained eye. Thank you.
This issue was resolved. I didn't know that when using Pylons from the CLI, I have to load the entire environment:
from paste.deploy import appconfig
from pylons import config
from project.config.environment import load_environment
conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)
from project.model import *
After this the database queries executed without a problem.