I want to add an auto-increment column that is not a primary key to an existing MySQL database.
The command required on the server for this operation is:
ALTER TABLE `mytable` ADD `id` INT UNIQUE NOT NULL AUTO_INCREMENT FIRST
The issue I face is how to replicate this table change through an Alembic migration. I have tried:
from alembic import op
import sqlalchemy as sa

def upgrade():
    op.add_column('mytable', sa.Column('id', sa.INTEGER(),
                  nullable=False, autoincrement=True))
but when I try to insert a row with the following command, where col1 and col2 are the non-nullable columns:
INSERT INTO `mytable` (`col1`, `col2`) VALUES ('foo', 'bar');
I expect the table to generate the id for me automatically, but instead I get:
ERROR 1364 (HY000): Field 'id' doesn't have a default value
If I inspect the SQL autogenerated by Alembic using the following command:
alembic upgrade 'hash-of-revision' --sql
it spits out, for the given revision:
ALTER TABLE mytable ADD COLUMN id INTEGER NOT NULL;
Which means that either Alembic or SQLAlchemy is ignoring the autoincrement flag when generating the migration SQL.
Is there a way I can solve this? Or can I base the migration on a custom SQL command?
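For the latter option: Alembic can run literal SQL inside a migration via op.execute(), so a minimal sketch of a migration that issues the exact statement from above would be:

from alembic import op

def upgrade():
    # Pass the MySQL DDL through verbatim; Alembic executes it as-is.
    op.execute(
        "ALTER TABLE `mytable` "
        "ADD `id` INT UNIQUE NOT NULL AUTO_INCREMENT FIRST"
    )

def downgrade():
    op.execute("ALTER TABLE `mytable` DROP COLUMN `id`")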
First we add the column and allow it to have null values:
op.add_column('abc', Column('id', BIGINT(unsigned=True)))
Then we create the primary key:
op.create_primary_key('abc_pk', table_name='abc', columns=['id'])
Not sure how it allows me to add a primary key on an empty column, but I guess it is because it is in a transaction block.
Then we alter the column to turn on autoincrement:
op.alter_column(table_name='abc', column_name='id', existing_type=BIGINT(unsigned=True), autoincrement=True, existing_autoincrement=True, nullable=False)
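Putting the three steps together, a sketch of the complete migration, with the imports it needs (the table name 'abc' and the MySQL BIGINT type are as assumed above; the autoincrement keyword of op.alter_column() is understood by the MySQL dialect):

from alembic import op
from sqlalchemy import Column
from sqlalchemy.dialects.mysql import BIGINT

def upgrade():
    # 1) add the column, nullable at first so existing rows are accepted
    op.add_column('abc', Column('id', BIGINT(unsigned=True)))
    # 2) promote it to the primary key
    op.create_primary_key('abc_pk', table_name='abc', columns=['id'])
    # 3) turn on AUTO_INCREMENT and make it NOT NULL in one ALTER
    op.alter_column(
        table_name='abc',
        column_name='id',
        existing_type=BIGINT(unsigned=True),
        autoincrement=True,
        existing_autoincrement=True,
        nullable=False,
    )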
As of SQLAlchemy 1.4+ and Alembic 1.9 you can use the Identity type, which, according to the docs, supersedes the Serial type.
This declarative ORM model:
import uuid
from decimal import Decimal

from sqlalchemy import UUID, Identity, Integer, Numeric, String
from sqlalchemy.orm import Mapped, mapped_column

class ProductOption(Base):
    __tablename__: str = 'product_options'

    id: Mapped[int] = mapped_column(Integer, server_default=Identity(start=1, cycle=True), primary_key=True)
    uuid: Mapped[uuid.UUID] = mapped_column(UUID, nullable=False, unique=True)
    name: Mapped[str] = mapped_column(String(50), nullable=False)
    price: Mapped[Decimal] = mapped_column(Numeric(16, 4), nullable=False)
    cost: Mapped[Decimal] = mapped_column(Numeric(16, 4), nullable=False)
    unit: Mapped[str] = mapped_column(String(8), nullable=False)
    duration: Mapped[int] = mapped_column(Integer)
Results in the following Alembic --autogenerate migration:
op.create_table(
    "product_options",
    sa.Column(
        "id",
        sa.Integer(),
        sa.Identity(always=False, start=1, cycle=True),
        nullable=False,
    ),
    sa.Column("uuid", sa.UUID(), nullable=False),
    sa.Column("name", sa.String(length=50), nullable=False),
    sa.Column("price", sa.Numeric(precision=16, scale=4), nullable=False),
    sa.Column("cost", sa.Numeric(precision=16, scale=4), nullable=False),
    sa.Column("unit", sa.String(length=8), nullable=False),
    sa.Column("duration", sa.Integer(), nullable=False),
    sa.PrimaryKeyConstraint("id"),
    sa.UniqueConstraint("uuid"),
)
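sa.Identity also works with op.add_column() on an existing table, which is closer to what the original question asks. A sketch (note that Identity is rendered natively by backends with IDENTITY support such as PostgreSQL 10+, Oracle 12+, and MSSQL; on MySQL, AUTO_INCREMENT remains the mechanism):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # On PostgreSQL this emits roughly:
    # ALTER TABLE mytable ADD COLUMN id INTEGER NOT NULL
    #     GENERATED BY DEFAULT AS IDENTITY (START WITH 1)
    op.add_column(
        'mytable',
        sa.Column('id', sa.Integer(), sa.Identity(start=1), nullable=False),
    )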
I'm trying to create a database using SQLAlchemy connected to Snowflake, with Alembic for migrations, for an app built with FastAPI. I created some models, and creating them in Snowflake works fine; for example:
create or replace TABLE PRICE_SERVICE.FP7.LOCATION (
ID NUMBER(38,0) NOT NULL autoincrement,
CREATED_AT TIMESTAMP_NTZ(9),
UPDATED_AT TIMESTAMP_NTZ(9),
ADDRESS VARCHAR(16777216),
LATITUDE VARCHAR(16777216) NOT NULL,
LONGITUDE VARCHAR(16777216) NOT NULL,
unique (LATITUDE),
unique (LONGITUDE),
primary key (ID)
);
but when I try to create a new object in this table, I get:
sqlalchemy.orm.exc.FlushError: Instance <Location at 0x7fead79677c0> has a NULL identity key. If this is an auto-generated value, check that the database table allows generation of new primary key values, and that the mapped Column object is configured to expect these generated values. Ensure also that this flush() is not occurring at an inappropriate time, such as within a load() event.
my model is:
class Location(Base):
    id = Column(Integer, primary_key=True)
    address = Column(String)
    latitude = Column(String, unique=True, nullable=False)
    longitude = Column(String, unique=True, nullable=False)

    buildings = relationship("Building", back_populates="location")
    quotes = relationship("Quote", back_populates="location")
    binds = relationship("Bind", back_populates="location")
and I'm trying to do this:
def create_location(db: Session, data: Dict[str, Any]) -> Location:
    location = Location(
        address=data["address"],  # type: ignore
        latitude=data["lat"],  # type: ignore
        longitude=data["lng"],  # type: ignore
    )
    db.add(location)
    db.commit()
    return location
I also tried using:
id = Column(Integer, Sequence("id_seq"), primary_key=True)
but I got:
sqlalchemy.exc.StatementError: (sqlalchemy.exc.ProgrammingError) (snowflake.connector.errors.ProgrammingError) 000904 (42000): SQL compilation error: error line 1 at position 7
backend_1 | invalid identifier 'ID_SEQ.NEXTVAL'
You forgot to define the Sequence in your model. When you define the Sequence value on table creation in Snowflake, a Sequence is generated at the schema level.
from sqlalchemy import Column, Integer, Sequence
...

class Location(Base):
    id = Column(Integer, Sequence("Location_Id"), primary_key=True,
                autoincrement=True)
    address = Column(String)
    ...
Make sure your user role has USAGE permission on that sequence; that should take care of setting the next value for your primary key.
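For reference, a sketch of granting that permission through SQLAlchemy; the connection URL, sequence name, and role here are hypothetical, so adjust them to your account:

from sqlalchemy import create_engine, text

# Hypothetical Snowflake URL: snowflake://<user>:<password>@<account>/<database>/<schema>
engine = create_engine("snowflake://user:password@my_account/PRICE_SERVICE/FP7")

with engine.connect() as conn:
    # The sequence lives at the schema level, so the application's
    # role needs USAGE on it to call NEXTVAL.
    conn.execute(text('GRANT USAGE ON SEQUENCE "Location_Id" TO ROLE app_role'))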
An approach that helps me with table primary keys is defining a mixin class that uses declared_attr to define the primary key automatically based on the table name:
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declared_attr

class SomeMixin(object):
    @declared_attr
    def record_id(cls):
        """Use table name to define the pk"""
        return Column(
            f"{cls.__tablename__} Id",
            Integer(),
            primary_key=True,
            autoincrement=True
        )
Then you pass said mixin into your model:
from sqlalchemy import Column, String
from wherever import SomeMixin

class Location(Base, SomeMixin):
    address = Column(String)
    ...
Now Location.record_id gets set through the declared_attr column you defined in the mixin.
Hope this helped.
I am using Flask with Alembic, and I have the two tables below, linked by a foreign key constraint:
table_one = Table("table_one", meta.Base.metadata,
    Column("id", BigInteger, primary_key=True),
    Column("filename", VARCHAR(65535)),
    Column("mission", VARCHAR(65535)),
)

table_two = Table("table_two", meta.Base.metadata,
    Column("id", BigInteger, primary_key=True),
    Column("file_id", BigInteger, ForeignKey("table_one.id")),
    Column("username", ArrowType(timezone=True)),
)
I am trying to get rid of table_one with the Alembic revision below:
def upgrade():
    op.drop_table('table_one')
    op.drop_constraint('table_two_id_fkey', 'table_two', type_='foreignkey')
    op.drop_column('table_two', 'file_id')
    op.drop_column('table_two', 'id')

def downgrade():
    op.add_column('table_two', sa.Column('id', sa.BIGINT(), autoincrement=True, nullable=False))
    op.add_column('table_two', sa.Column('file_id', sa.BIGINT(), autoincrement=False, nullable=True))
    op.create_foreign_key('table_two_file_id_fkey', 'table_two', 'table_one', ['file_id'], ['id'])
    op.create_table('table_one',
        sa.Column('id', sa.BIGINT(), autoincrement=True, nullable=False),
        sa.Column('filename', sa.VARCHAR(length=65535), autoincrement=False, nullable=True),
        sa.Column('mission', sa.VARCHAR(length=65535), autoincrement=False, nullable=True),
        sa.PrimaryKeyConstraint('id', name='table_one_pkey')
    )
but unfortunately there seems to be an issue with the cascade, and I am facing the error below:
psycopg2.errors.DependentObjectsStillExist: cannot drop table table_one because other objects depend on it
DETAIL: constraint table_two_file_id_fkey on table table_two depends on table table_one
HINT: Use DROP ... CASCADE to drop the dependent objects too.
Does anyone have an idea how to solve this issue?
In case anyone is trying to drop a foreign key, as the question title says:
You can remove a foreign key using the drop_constraint() function in Alembic:
op.drop_constraint(constraint_name="FK_<target>_<source>", table_name="<source>")
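Applied to the migration in the question, the fix is ordering: drop the foreign key constraint (using the name from the error message) and the dependent columns before dropping the table they reference. A sketch:

def upgrade():
    # Drop the FK first so nothing depends on table_one any more.
    op.drop_constraint('table_two_file_id_fkey', 'table_two', type_='foreignkey')
    op.drop_column('table_two', 'file_id')
    op.drop_column('table_two', 'id')
    op.drop_table('table_one')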
I'm a total newbie to Alembic, SQLAlchemy, and Python. I've gotten to the point where Alembic compares existing objects in the database against the declarative classes I've made, and there's one pesky index (for a foreign key) that Alembic refuses to leave in place in my initial migration.
I'm completely at a loss as to why the migration keeps trying to drop and re-create this index, which, if I leave it in the migration, I'll wager is going to fail anyway. Plus, if I don't reconcile the class with the database, this will likely come up every time I auto-generate migrations.
Here's the pertinent part of what is in the upgrade method:
op.drop_index(
'vndr_prod_tp_cat_category_fk_idx',
table_name='vendor_product_types_magento_categories'
)
In the downgrade method:
op.create_index(
'vndr_prod_tp_cat_category_fk_idx',
'vendor_product_types_magento_categories',
['magento_category_id'],
unique=False
)
...here's the DDL for the table as it exists in MySQL:
CREATE TABLE `vendor_product_types_magento_categories` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`vendor_product_type_id` bigint(20) unsigned NOT NULL,
`magento_category_id` bigint(20) unsigned NOT NULL,
`sequence` tinyint(3) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `vendor_product_types_magento_categories_uq` (`vendor_product_type_id`,`magento_category_id`,`sequence`),
KEY `vndr_prod_tp_cat_category_fk_idx` (`magento_category_id`),
CONSTRAINT `vndr_prod_tp_cat_magento_category_fk` FOREIGN KEY (`magento_category_id`) REFERENCES `magento_categories` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `vndr_prod_tp_cat_product_type_fk` FOREIGN KEY (`vendor_product_type_id`) REFERENCES `vendor_product_types` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=101 DEFAULT CHARSET=utf8
...and here's the class I wrote:
from sqlalchemy import Column, Integer, UniqueConstraint, ForeignKeyConstraint, Index
from sqlalchemy.dialects.mysql import TIMESTAMP
from sqlalchemy.sql import text
from .base import Base
class VendorProductTypesMagentoCategories(Base):
    __tablename__ = 'vendor_product_types_magento_categories'

    id = Column(Integer, primary_key=True)
    vendor_product_type_id = Column(
        Integer,
        nullable=False
    )
    magento_category_id = Column(
        Integer,
        nullable=False
    )
    sequence = Column(Integer, nullable=False)
    created_at = Column(TIMESTAMP, server_default=text('CURRENT_TIMESTAMP'), nullable=False)
    updated_at = Column(
        TIMESTAMP,
        server_default=text('NULL ON UPDATE CURRENT_TIMESTAMP'),
        nullable=True
    )

    __table_args__ = (
        UniqueConstraint(
            'vendor_product_type_id',
            'magento_category_id',
            'sequence',
            name='vendor_product_types_magento_categories_uq'
        ),
        ForeignKeyConstraint(
            ('vendor_product_type_id',),
            ('vendor_product_types.id',),
            name='vndr_prod_tp_cat_product_type_fk'
        ),
        ForeignKeyConstraint(
            ('magento_category_id',),
            ('magento_categories.id',),
            name='vndr_prod_tp_cat_category_fk_idx'
        ),
    )

    def __repr__(self):
        return '<VendorProductTypesMagentoCategories (id={}, vendor_name={}, product_type={})>'.format(
            self.id,
            self.vendor_name,
            self.product_type
        )
You define your category foreign key in your Python code as
ForeignKeyConstraint(
    ('magento_category_id',),
    ('magento_categories.id',),
    name='vndr_prod_tp_cat_category_fk_idx'
)
Here you use vndr_prod_tp_cat_category_fk_idx as the name of the foreign key constraint, not as the name of the underlying index, which explains why SQLAlchemy wants to drop the index.
You should use vndr_prod_tp_cat_magento_category_fk (the constraint name in your DDL) as the foreign key name, and add a separate Index() construct named vndr_prod_tp_cat_category_fk_idx to create the index.
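A sketch of the corrected __table_args__ for the model above:

__table_args__ = (
    UniqueConstraint(
        'vendor_product_type_id',
        'magento_category_id',
        'sequence',
        name='vendor_product_types_magento_categories_uq'
    ),
    ForeignKeyConstraint(
        ('vendor_product_type_id',),
        ('vendor_product_types.id',),
        name='vndr_prod_tp_cat_product_type_fk'
    ),
    ForeignKeyConstraint(
        ('magento_category_id',),
        ('magento_categories.id',),
        name='vndr_prod_tp_cat_magento_category_fk'  # constraint name from the DDL
    ),
    # Declare the FK's index explicitly so autogenerate stops dropping it.
    Index('vndr_prod_tp_cat_category_fk_idx', 'magento_category_id'),
)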
I'm trying to insert a row into a Postgresql table that looks like this:
CREATE TABLE launch_ids(
id SERIAL PRIMARY KEY,
launch_time TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT
(now() at time zone 'utc')
);
My class looks like this:
class LaunchId(Base):
    """Launch ID table for runs"""
    __tablename__ = 'launch_ids'

    id = Column(Integer, primary_key=True)
    launch_time = Column(DateTime)
The launch_time should be managed by the database. I know it's possible to use default=datetime.datetime.utcnow, but that generates the time on the client. I know it's possible to use default=func.now(), but that means that if the database's definition of the default changes, I need to change the default in two places.
Here is what I get when I try to insert a row in launch_ids without specifying a value:
l = LaunchId()
session.add(l)
session.commit()
IntegrityError: (psycopg2.IntegrityError) null value in column "launch_time" violates not-null constraint
DETAIL: Failing row contains (1, null).
[SQL: 'INSERT INTO launch_ids (launch_time) VALUES (%(launch_time)s) RETURNING launch_ids.id'] [parameters: {'launch_time': None}]
Use FetchedValue:
from sqlalchemy.schema import FetchedValue
class LaunchId(Base):
    ...
    launch_time = Column(DateTime, FetchedValue())
Specify the server_default on the column like this:
class LaunchId(Base):
    """Launch ID table for runs"""
    __tablename__ = 'launch_ids'

    id = Column(Integer, primary_key=True)
    launch_time = Column(DateTime, nullable=False,
                         server_default=text("(now() at time zone 'utc')"))
Then adding a new launch_id through the session will work. server_default works differently from default in that it is generated on the server side. Official SQLAlchemy documentation: http://docs.sqlalchemy.org/en/latest/core/defaults.html#server-side-defaults
By specifying nullable=False, this model also becomes a true reflection of the CREATE TABLE you specified, and thus can be generated through Base.metadata.create_all or using alembic.
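Since the value is generated on the server, it is available on the instance after the flush; a quick usage sketch:

l = LaunchId()
session.add(l)
session.commit()

# The attribute expires on commit; accessing it loads the
# server-generated timestamp back from the database.
print(l.launch_time)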
I have a table that does not have a primary key, and I really do not want to apply this constraint to it.
In SQLAlchemy, I defined the table class by:
class SomeTable(Base):
__table__ = Table('SomeTable', meta, autoload=True, autoload_with=engine)
When I try to query this table, I got:
ArgumentError: Mapper Mapper|SomeTable|SomeTable could not assemble any primary key columns for mapped table 'SomeTable'.
How can I relax the constraint that every table must have a primary key?
There is only one way that I know of to circumvent the primary key requirement in SQLAlchemy: map a specific column or columns to your table as the primary key, even if they aren't a primary key themselves.
http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#how-do-i-map-a-table-that-has-no-primary-key.
There is no proper solution for this, but there are workarounds:
Workaround 1
Adding the parameter primary_key to an existing column that is not actually a primary key will work.
class SomeTable(Base):
    __tablename__ = 'some_table'
    # mark an existing column as the PK on the ORM side,
    # whether or not it is a primary key in the database
    some_other_already_existing_column = Column(..., primary_key=True)
Workaround 2
Just declare a new dummy column on the ORM layer, not in the actual DB; define it only in the SQLAlchemy model:
class SomeTable(Base):
    __tablename__ = 'some_table'
    # exists only to satisfy the mapper; don't add it in the DB
    column_not_exist_in_db = Column(Integer, primary_key=True)
Disclaimer: Oracle only
Oracle databases secretly store something called rowid to uniquely identify each record in a table, even if the table doesn't have a primary key. I solved my lack-of-primary-key problem (which I did not cause!) by constructing my ORM object like:
class MyTable(Base):
    __tablename__ = 'stupid_poorly_designed_table'

    rowid = Column(String, primary_key=True)
    column_a = Column(String)
    column_b = Column(String)
    ...
You can see what rowid actually looks like (it's a hex value, I believe) by running:
SELECT rowid FROM stupid_poorly_designed_table;
Here is an example using __mapper_args__ and a synthetic primary key. Because the table contains time-series data, there is no need for a surrogate primary key: every row is uniquely addressed by its (pair_id, timestamp) tuple.
import sqlalchemy as sa
from sqlalchemy import orm

class Candle(Base):
    __tablename__ = "ohlvc_candle"
    __table_args__ = (
        sa.UniqueConstraint('pair_id', 'timestamp'),
    )

    #: Start second of the candle
    timestamp = sa.Column(sa.TIMESTAMP(timezone=True), nullable=False)

    open = sa.Column(sa.Float, nullable=False)
    close = sa.Column(sa.Float, nullable=False)
    high = sa.Column(sa.Float, nullable=False)
    low = sa.Column(sa.Float, nullable=False)
    volume = sa.Column(sa.Float, nullable=False)

    pair_id = sa.Column(sa.ForeignKey("pair.id"), nullable=False)
    pair = orm.relationship(Pair,
                            backref=orm.backref("candles",
                                                lazy="dynamic",
                                                cascade="all, delete-orphan",
                                                single_parent=True))

    __mapper_args__ = {
        "primary_key": [pair_id, timestamp]
    }
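With a mapper-level composite primary key like this, identity-map lookups take a (pair_id, timestamp) tuple, in the order given in "primary_key". A usage sketch, assuming a session and an existing row:

from datetime import datetime, timezone

# The identity key follows the order of the "primary_key" list above.
candle = session.get(
    Candle,
    (1, datetime(2024, 1, 1, tzinfo=timezone.utc)),
)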
MSSQL tested
I know this thread is ancient, but I spent way too long getting this to work not to share it :)
import logging

from sqlalchemy import Column, Integer, Table, event
from sqlalchemy.ext.compiler import compiles

class RowID(Column):
    pass

@compiles(RowID)
def compile_mycolumn(element, compiler, **kw):
    return "row_number() OVER (ORDER BY (SELECT NULL))"

@event.listens_for(Table, "after_parent_attach")
def after_parent_attach(target, parent):
    if not target.primary_key:
        # if no pkey, create our own based on the returned row number
        # (untested for writes - likely won't work)
        logging.info("No pkey defined for table, using rownumber %s", target)
        target.append_column(RowID('row_id', Integer, primary_key=True))
https://docs.sqlalchemy.org/en/14/faq/ormconfiguration.html#how-do-i-map-a-table-that-has-no-primary-key
One way from there:
In SQLAlchemy ORM, to map to a specific table, there must be at least one column designated as the primary key column; multi-column composite primary keys are of course also perfectly possible. These columns do not need to be known to the database as primary key columns. The columns only need to behave like a primary key, such as a non-nullable unique identifier for a row.
my code:
from ..meta import Base, Column, Integer, Date

class ActiveMinutesByDate(Base):
    __tablename__ = "user_computer_active_minutes_by_date"

    user_computer_id = Column(Integer(), nullable=False, primary_key=True)
    user_computer_date_check = Column(Date(), default=None, primary_key=True)
    user_computer_active_minutes = Column(Integer(), nullable=True)
The solution I found is to add an auto-incrementing primary key column to the table, then use that as your primary key. The database should deal with everything else beyond that.