I am using Flask with Alembic, and I have the two tables below, linked by a foreign key constraint:
table_one = Table("table_one", meta.Base.metadata,
    Column("id", BigInteger, primary_key=True),
    Column("filename", VARCHAR(65535)),
    Column("mission", VARCHAR(65535)),
)

table_two = Table("table_two", meta.Base.metadata,
    Column("id", BigInteger, primary_key=True),
    Column("file_id", BigInteger, ForeignKey("table_one.id")),
    Column("username", ArrowType(timezone=True)),
)
I am trying to get rid of table_one with the Alembic revision below:
def upgrade():
    op.drop_table('table_one')
    op.drop_constraint('table_two_id_fkey', 'table_two', type_='foreignkey')
    op.drop_column('table_two', 'file_id')
    op.drop_column('table_two', 'id')
def downgrade():
    op.add_column('table_two', sa.Column('id', sa.BIGINT(), autoincrement=True, nullable=False))
    op.add_column('table_two', sa.Column('file_id', sa.BIGINT(), autoincrement=False, nullable=True))
    op.create_foreign_key('table_two_file_id_fkey', 'table_two', 'table_one', ['file_id'], ['id'])
    op.create_table('table_one',
        sa.Column('id', sa.BIGINT(), autoincrement=True, nullable=False),
        sa.Column('filename', sa.VARCHAR(length=65535), autoincrement=False, nullable=True),
        sa.Column('mission', sa.VARCHAR(length=65535), autoincrement=False, nullable=True),
        sa.PrimaryKeyConstraint('id', name='table_one_pkey')
    )
but unfortunately there seems to be an issue with the cascade, and I am facing the error below:
psycopg2.errors.DependentObjectsStillExist: cannot drop table table_one because other objects depend on it
DETAIL: constraint table_two_file_id_fkey on table table_two depends on table table_one
HINT: Use DROP ... CASCADE to drop the dependent objects too.
Does anyone have an idea how to solve this issue?
In case anyone is trying to drop a foreign key, as the question title says:
You can remove a foreign key using the drop_constraint() function in Alembic:
op.drop_constraint(constraint_name="FK_<target>_<source>", table_name="<source>")
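For the original error above, note that the upgrade drops table_one while a foreign key in table_two still references it. A minimal sketch of a reordered upgrade (assuming the constraint is named table_two_file_id_fkey, as in the error message), dropping the dependent constraint and column before the table:
def upgrade():
    # Drop the dependency first so nothing references table_one anymore,
    # then the table itself can be dropped without CASCADE.
    op.drop_constraint('table_two_file_id_fkey', 'table_two', type_='foreignkey')
    op.drop_column('table_two', 'file_id')
    op.drop_table('table_one')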
I'm trying to do internationalization, and I've encountered one thing I can't seem to figure out. Fair to say I am a total novice using SQLAlchemy (coming from the Django world).
I am using SQLAlchemy Core (v1.4.36) with PostgreSQL and async sessions. Let's assume I have the following tables:
categories = Table(
    'category',
    catalog_metadata,
    Column('id', Integer, primary_key=True, autoincrement=True)
)

category_translation = Table(
    'category_translation',
    catalog_metadata,
    Column('id', Integer, primary_key=True, autoincrement=True),
    Column('name', String(50), nullable=False),
    Column('language', String(2), nullable=False),
    Column('original_id', Integer, ForeignKey('category.id', ondelete="CASCADE"))
)

product = Table(
    'product',
    catalog_metadata,
    Column('id', Integer, primary_key=True, autoincrement=True),
    Column('category_id', Integer, ForeignKey('category.id', ondelete="CASCADE"), nullable=True),
)

product_translation = Table(
    'product_translation',
    catalog_metadata,
    Column('id', Integer, primary_key=True, autoincrement=True),
    Column('language', String(2), nullable=False),
    Column('original_id', Integer, ForeignKey('product.id', ondelete="CASCADE")),
    Column('name', String(50), nullable=False),
    Column('description', Text)
)
To explain, in case it is not obvious: I have two main tables, category and product. Each of them has "translatable" fields that are exposed in the secondary tables category_translation and product_translation respectively. The main goal is, for a specific requested language, to retrieve the information from the DB in that language and load it onto a Category and Product class. The mapper is defined next:
mapper.map_imperatively(
    model.Category,
    categories,
    properties={
        'products': relationship(model.Product, backref="category"),
        'translations': relationship(
            model.CategoryTranslation,
            backref="category",
            collection_class=attribute_mapped_collection('language')
        )
    },
)

mapper.map_imperatively(model.CategoryTranslation, category_translation)
mapper.map_imperatively(model.ProductTranslation, product_translation)

mapper.map_imperatively(
    model.Product,
    product,
    properties={
        'translations': relationship(
            model.ProductTranslation,
            backref="product",
            collection_class=attribute_mapped_collection('language')
        )
    },
)
The implementation of the model classes is irrelevant, but you can assume they have the needed fields defined. If you must know, I am using FastAPI and pydantic to serialize the output. However, that is not the problem.
What I want to know is: how can I set the translated fields on the mapped classes when querying the database?
Meaning that the instantiated objects for model.Category and model.Product have the name field and the name and description fields filled, respectively.
As of now I am doing this query:
select(model_cls).join(translation_cls, (model_cls.id == translation_cls.original_id) & (translation_cls.language == requestedLanguage))
Where model_cls is one of the main tables, and translation_cls is its respective translation table. For instance:
select(model.Category).join(model.CategoryTranslation, (model.Category.id == model.CategoryTranslation.original_id) & (model.CategoryTranslation.language == requestedLanguage))
Consider that when requesting a product, we may need to join and set attributes both for the product and for its related category. The response may need to look like this:
{
    "id": 1,
    "name": "TranslatedProductName",
    "category": {
        "id": 1,
        "name": "TranslatedCategoryName",
        "description": "TranslatedCategoryDescription"
    }
}
Hope I've explained myself. If anyone needs more info or explanation, please comment.
I want to add an autoincrement column that is not a primary key to an existing MySQL database.
The command that needs to be issued on the server for this operation is the following:
ALTER TABLE `mytable` ADD `id` INT UNIQUE NOT NULL AUTO_INCREMENT FIRST
The issue I face is how to replicate this table change through an Alembic migration. I have tried:
from alembic import op
import sqlalchemy as sa

def upgrade():
    op.add_column('mytable', sa.Column('id', sa.INTEGER(),
                  nullable=False, autoincrement=True))
but when I try to insert a row with the following command:
INSERT INTO `mytable` (`col1`, `col2`) VALUES ('foo', 'bar');
where col1 and col2 are non-nullable columns, I expect the table to generate the id for me automatically. Instead I get:
ERROR 1364 (HY000): Field 'id' doesn't have a default value
If I inspect the SQL autogenerated by Alembic using the following command:
alembic upgrade 'hash-of-revision' --sql
it spits out, for the given revision:
ALTER TABLE mytable ADD COLUMN id INTEGER NOT NULL;
This means that either Alembic or SQLAlchemy is ignoring the autoincrement field when generating the SQL for the migration.
Is there a way I can solve this? Or can I base the migration on a custom SQL command?
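As a side note on that last question: an Alembic migration can fall back to hand-written SQL via op.execute(). A minimal sketch, reusing the exact ALTER TABLE statement from above:
from alembic import op

def upgrade():
    # Run the raw MySQL DDL directly; Alembic passes the string through unchanged.
    op.execute(
        "ALTER TABLE `mytable` ADD `id` INT UNIQUE NOT NULL AUTO_INCREMENT FIRST"
    )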
First we add the column and allow it to have null values:
op.add_column('abc', Column('id', BIGINT(unsigned=True), comment='This column stores the type of phrase') )
Then we create the primary key:
op.create_primary_key( 'abc_pk', table_name='abc', columns=['id'] )
Not sure how it allows me to add a primary key on an empty column, but I guess it is because it is in a transaction block.
Then we alter the column to be autoincrementing:
op.alter_column( existing_type=BIGINT(unsigned=True), table_name='abc', column_name='id', autoincrement=True, existing_autoincrement=True, nullable=False)
As of SQLAlchemy 1.4+ and Alembic 1.9 you can use the Identity type, which, according to the docs, supersedes the Serial type.
This declarative ORM model:
class ProductOption(Base):
    __tablename__: str = 'product_options'
    id: Mapped[int] = mapped_column(Integer, server_default=Identity(start=1, cycle=True), primary_key=True)
    uuid: Mapped[UUID] = mapped_column(UUID, nullable=False, unique=True)
    name: Mapped[str] = mapped_column(String(50), nullable=False)
    price: Mapped[Decimal] = mapped_column(Numeric(16, 4), nullable=False)
    cost: Mapped[Decimal] = mapped_column(Numeric(16, 4), nullable=False)
    unit: Mapped[str] = mapped_column(String(8), nullable=False)
    duration: Mapped[int] = mapped_column(Integer)
Results in the following Alembic --autogenerate migration:
op.create_table(
    "product_options",
    sa.Column(
        "id",
        sa.Integer(),
        sa.Identity(always=False, start=1, cycle=True),
        nullable=False,
    ),
    sa.Column("uuid", sa.UUID(), nullable=False),
    sa.Column("name", sa.String(length=50), nullable=False),
    sa.Column("price", sa.Numeric(precision=16, scale=4), nullable=False),
    sa.Column("cost", sa.Numeric(precision=16, scale=4), nullable=False),
    sa.Column("unit", sa.String(length=8), nullable=False),
    sa.Column("duration", sa.Integer(), nullable=False),
    sa.PrimaryKeyConstraint("id"),
    sa.UniqueConstraint("uuid"),
)
In the SQLAlchemy ORM with SQLite, how do I write a table definition equivalent to a raw UNIQUE(fld) ON CONFLICT REPLACE?
It should be the analogue of a raw query like:
CREATE TABLE tbl (fld TEXT UNIQUE, UNIQUE(fld) ON CONFLICT IGNORE)
There is INSERT … ON DUPLICATE KEY UPDATE for MySQL, but that is a statement, not part of the table definition.
ON CONFLICT in SQLite constraints has been supported since SQLAlchemy 1.3.
A constraint can be applied inline in a column definition:
tbl1 = sa.Table(
    't1',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column(
        'name', sa.String, unique=True, sqlite_on_conflict_unique='REPLACE'
    ),
    sa.Column('colour', sa.String),
)
generating this DDL
CREATE TABLE t1 (
    id INTEGER NOT NULL,
    name VARCHAR,
    colour VARCHAR,
    PRIMARY KEY (id),
    UNIQUE (name) ON CONFLICT REPLACE
)
Or as a separate constraint:
tbl2 = sa.Table(
    't2',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('name', sa.String),
    sa.Column('colour', sa.String),
    sa.Column('size', sa.Integer),
    sa.UniqueConstraint('name', 'colour', sqlite_on_conflict='IGNORE')
)
generating
CREATE TABLE t2 (
    id INTEGER NOT NULL,
    name VARCHAR,
    colour VARCHAR,
    size INTEGER,
    PRIMARY KEY (id),
    UNIQUE (name, colour) ON CONFLICT IGNORE
)
I'm trying to create tables on the fly from existing data... however, the table I need to reference has dual primary keys, and I can't find how to satisfy the restrictions.
I.e. I start with the following two tables...
self.DDB_PAT_BASE = Table('DDB_PAT_BASE', METADATA,
    Column('PATID', INTEGER(), primary_key=True),
    Column('PATDB', INTEGER(), primary_key=True),
    Column('FAMILYID', INTEGER()),
)

self.DDB_ERX_MEDICATION_BASE = Table('DDB_ERX_MEDICATION_BASE', METADATA,
    Column('ErxID', INTEGER(), primary_key=True),
    Column('ErxGuid', VARCHAR(length=36)),
    Column('LastDownload', DATETIME()),
    Column('LastUpload', DATETIME()),
    Column('Source', INTEGER()),
)
When I try the following, it works...
t = Table('testtable', METADATA,
    Column('ErxID', INTEGER(), ForeignKey('DDB_ERX_MEDICATION_BASE.ErxID')),
)
t.create()
However, both of the following give me the error...
t = Table('testtable', METADATA,
    Column('PATID', INTEGER(), ForeignKey('DDB_PAT_BASE.PATID')),
)
t.create()

t = Table('testtable', METADATA,
    Column('PATID', INTEGER(), ForeignKey('DDB_PAT_BASE.PATID')),
    Column('PATDB', INTEGER(), ForeignKey('DDB_PAT_BASE.PATDB')),
)
t.create()
sqlalchemy.exc.OperationalError: (pymssql.OperationalError) (1776, "There are no primary or candidate keys in the referenced table 'DDB_PAT_BASE' that match the referencing column list in the foreign key 'FK__testtabl__PATID__3FD3A585'.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\nDB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n") [SQL: '\nCREATE TABLE [testtable] (\n\t[PATID] INTEGER NULL, \n\tFOREIGN KEY([PATID]) REFERENCES [DDB_PAT_BASE] ([PATID])\n)\n\n']
The table you are pointing to has a composite primary key, not multiple primary keys. Hence, you need to create a composite foreign key, not two foreign keys each pointing to one half of the composite primary key:
t = Table('testtable', METADATA,
    Column('PATID', INTEGER()),
    Column('PATDB', INTEGER()),
    ForeignKeyConstraint(['PATID', 'PATDB'], ['DDB_PAT_BASE.PATID', 'DDB_PAT_BASE.PATDB']),
)
t.create()
The situation is a little bit simplified. I have two migration files for sqlalchemy-migrate:
In the first, I create the table volume_usage_cache, then autoload it, create a copy of its columns, and print them:
from sqlalchemy import Column, DateTime
from sqlalchemy import Boolean, BigInteger, MetaData, Integer, String, Table

def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    # Create new table
    volume_usage_cache = Table('volume_usage_cache', meta,
        Column('deleted', Boolean(create_constraint=True, name=None)),
        Column('id', Integer(), primary_key=True, nullable=False),
        Column('curr_write_bytes', BigInteger(), default=0),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )
    volume_usage_cache.create()

    volume_usage_cache = Table('volume_usage_cache', meta, autoload=True)
    columns = []
    [columns.append(column.copy()) for column in volume_usage_cache.columns]
    print columns
And I get in the log what I expected:
[Column('deleted', Boolean(), table=None), Column('id', Integer(), table=None,
primary_key=True, nullable=False), Column('curr_write_bytes', BigInteger(),
table=None, default=ColumnDefault(0))]
But if I make a copy of the columns in the second migration file (which runs after the first):
from sqlalchemy import MetaData, String, Integer, Boolean, Table, Column, Index

def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    table = Table("volume_usage_cache", meta, autoload=True)
    columns = []
    for column in table.columns:
        columns.append(column.copy())
    print columns
I get a different result:
[Column('deleted', INTEGER(), table=None, default=ColumnDefault(0)),
Column(u'id', INTEGER(), table=None, primary_key=True, nullable=False),
Column(u'curr_write_bytes', NullType(), table=None)]
Why does the curr_write_bytes column have NullType?
There are two problems:
First:
In the first file we are using the old metadata, which already contains all the columns with the needed types.
So if we create a new MetaData instance, SQLAlchemy will load the table info from the database and get the same result as in the second file.
Second:
There is no support in SQLAlchemy for the BigInteger column type on SQLite, and SQLite doesn't really enforce column types at all. So we can create a table with a BigInteger column (and it will work), but after autoload the type of that column is reflected as NullType.
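To illustrate the first point, a minimal sketch (a hypothetical helper, assuming the same migrate_engine and that volume_usage_cache has already been created): reflecting the table through a brand-new MetaData forces SQLAlchemy to read the column types back from the database, which is exactly what the second migration file does.
from sqlalchemy import MetaData, Table

def show_reflected_types(migrate_engine):
    fresh_meta = MetaData()
    fresh_meta.bind = migrate_engine
    # Autoload from the database instead of reusing the metadata that created
    # the table; per the explanation above, on SQLite curr_write_bytes then
    # comes back with NullType even inside the first migration file.
    reflected = Table('volume_usage_cache', fresh_meta, autoload=True)
    for column in reflected.columns:
        print(column)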