If I have a SQLAlchemy declarative model like below:
class Test(Model):
    __tablename__ = 'tests'
    id = Column(Integer, Sequence('test_id_seq'), primary_key=True)
    ...
    Atest_id = Column(Integer, ForeignKey('Atests.id'), nullable=True)
    Btest_id = Column(Integer, ForeignKey('Btests.id'), nullable=True)
    Ctest_id = Column(Integer, ForeignKey('Ctests.id'), nullable=True)
    Dtest_id = Column(Integer, ForeignKey('Dtests.id'), nullable=True)
    Etest_id = Column(Integer, ForeignKey('Etests.id'), nullable=True)
    ...
    date = Column(DateTime)
    status = Column(String(20))  # pass, fail, needs_review
And I would like to ensure that only one of the *test_id foreign keys is present in a given row, how might I accomplish that in SQLAlchemy?
I see that there is an SQLAlchemy CheckConstraint object (see docs), but MySQL does not support check constraints.
The data model has interaction outside of SQLAlchemy, so preferably it would be a database-level check (MySQL)
Well, considering your requirements ("The data model has interaction outside of SQLAlchemy, so preferably it would be a database-level check (MySQL)" and "ensure that only one [..] is not null"), I think the best approach is to write a trigger like this:
DELIMITER $$
CREATE TRIGGER check_null_insert BEFORE INSERT
ON my_table
FOR EACH ROW BEGIN
    IF CHAR_LENGTH(CONCAT_WS('', NEW.a-NEW.a, NEW.b-NEW.b, NEW.c-NEW.c)) <> 1 THEN
        UPDATE `Error: Only one value of *test_id must be not null` SET z=0;
    END IF;
END$$
DELIMITER ;
Some tricks and considerations:
IF STATEMENT: To avoid tediously writing a check for each column being not null while all the others are null, I used this trick: reduce each column to at most one character and count how many characters result. NEW.a-NEW.a returns 0 (one character) when NEW.a is an integer, NULL-NULL returns NULL on MySQL, and CONCAT_WS('', ...) skips NULL arguments, so the length of the concatenation is exactly the number of non-null columns. The trigger therefore raises an error whenever that count is not 1.
ERROR TRIGGERING: I suppose you want to raise an error, so how do you do that in MySQL? You didn't mention your MySQL version. Only from MySQL 5.5 onwards can you use the SIGNAL syntax to throw an exception, so the more portable way is to issue an invalid statement, like: UPDATE xx SET z=0. If you are on MySQL 5.5 or later you could use: signal sqlstate '45000' set message_text = 'Error: Only one value of *test_id must be not null'; instead of UPDATE `Error: Only one value of *test_id must be not null` SET z=0;
Also, I think you want to check this on updates too, so use:
DELIMITER $$
CREATE TRIGGER check_null_update BEFORE UPDATE
ON my_table
FOR EACH ROW BEGIN
    IF CHAR_LENGTH(CONCAT_WS('', NEW.a-NEW.a, NEW.b-NEW.b, NEW.c-NEW.c)) <> 1 THEN
        UPDATE `Error: Only one value of *test_id must be not null` SET z=0;
    END IF;
END$$
DELIMITER ;
Or create a stored procedure and call it.
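If you manage the schema from SQLAlchemy, you can attach the trigger DDL to the table's after_create event so create_all emits it, and only on MySQL. A minimal sketch, assuming MySQL 5.5+ (for SIGNAL) and trimming the column list to two ids for brevity; the model and trigger names here are illustrative:

```python
from sqlalchemy import Column, DateTime, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import DDL

Base = declarative_base()

class Test(Base):
    __tablename__ = 'tests'
    id = Column(Integer, primary_key=True)
    Atest_id = Column(Integer, nullable=True)
    Btest_id = Column(Integer, nullable=True)
    date = Column(DateTime)
    status = Column(String(20))

# The trigger from above, using SIGNAL; column list trimmed to two ids.
only_one_fk_trigger = DDL(
    "CREATE TRIGGER check_null_insert BEFORE INSERT ON tests "
    "FOR EACH ROW BEGIN "
    "  IF CHAR_LENGTH(CONCAT_WS('', NEW.Atest_id - NEW.Atest_id, "
    "                                NEW.Btest_id - NEW.Btest_id)) <> 1 THEN "
    "    SIGNAL SQLSTATE '45000' "
    "    SET MESSAGE_TEXT = 'Only one value of *test_id must be not null'; "
    "  END IF; "
    "END"
)

# Emit the trigger right after CREATE TABLE, but only on MySQL.
event.listen(
    Test.__table__,
    'after_create',
    only_one_fk_trigger.execute_if(dialect='mysql'),
)

# On any other backend (SQLite here, so the sketch is runnable) the
# execute_if guard skips the trigger and only the table is created.
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
```

The same pattern works for the BEFORE UPDATE trigger; register a second DDL object on the same event.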
Update
For databases that support check constraints, the code is simpler; see this example for SQL Server:
CREATE TABLE MyTable (col1 INT NULL, col2 INT NULL, col3 INT NULL);
GO
ALTER TABLE MyTable
ADD CONSTRAINT CheckOnlyOneColumnIsNull
CHECK (
LEN(CONCAT(col1-col1, col2-col2, col3-col3)) = 1
)
GO
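On backends that do enforce CHECK constraints (MySQL does from 8.0.16 onwards), the same rule can live on the SQLAlchemy model itself. A sketch with hypothetical table and column names, using a boolean-sum variant of the counting trick; it relies on booleans coercing to 0/1 in arithmetic, which holds on SQLite (used here so the sketch runs) and MySQL:

```python
from sqlalchemy import CheckConstraint, Column, Integer, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    col1 = Column(Integer, nullable=True)
    col2 = Column(Integer, nullable=True)
    col3 = Column(Integer, nullable=True)

    # (col IS NOT NULL) evaluates to 1 or 0, so the sum counts how many
    # of the columns are populated; exactly one must be.
    __table_args__ = (
        CheckConstraint(
            '(col1 IS NOT NULL) + (col2 IS NOT NULL) + (col3 IS NOT NULL) = 1',
            name='check_only_one_not_null',
        ),
    )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(MyTable(col2=5))      # exactly one column set: accepted
    session.commit()

rejected = False
try:
    with Session(engine) as session:
        session.add(MyTable(col1=1, col3=2))  # two columns set: rejected
        session.commit()
except IntegrityError:
    rejected = True
```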
Related
I am aware of a similar question, How to fix error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint?, but the answers there did not fix my error.
I have the following SQLAlchemy model connected to a Postgres database:
class Injury(db.Model):
    __tablename__ = "injury"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    description = Column(String)
The DDL looks like
create table injury
(
    id bigint not null
        constraint idx_182837_injury_pkey
            primary key
        constraint injury_id_key
            unique,
    name text,
    description text
);
However, upon trying to insert something into the database like the following
injury = Injury(name='name', description='desc')
session.add(injury)
session.commit()
The following error occurs
(psycopg2.errors.NotNullViolation) null value in column "id" of relation "injury" violates not-null constraint
Upon opening up my debugger, I can verify that the id attribute in the injury object before I commit is None
Why is this occurring? Shouldn't primary_key=True in my schema tell Postgres that it is responsible for generating the id? I tried playing around with SEQUENCE and IDENTITY but got the same error.
You have to tell Postgres to auto-increment the column:
don't use bigint, use serial:
id SERIAL PRIMARY KEY
Check this: how-to-define-an-auto-increment-primary-key-in-postgresql-using-python
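On the SQLAlchemy side, nothing special is needed for this: a plain Integer primary key autoincrements by default, and is rendered as SERIAL on PostgreSQL. A runnable sketch against in-memory SQLite (where the same declaration maps to an autoincrementing rowid) showing the id being generated by the database:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Injury(Base):
    __tablename__ = 'injury'
    # A plain Integer primary key autoincrements by default:
    # SQLAlchemy emits SERIAL on PostgreSQL, INTEGER PRIMARY KEY on SQLite.
    id = Column(Integer, primary_key=True)
    name = Column(String)
    description = Column(String)

engine = create_engine('sqlite://')  # sketch: SQLite here, Postgres in production
Base.metadata.create_all(engine)

with Session(engine) as session:
    injury = Injury(name='name', description='desc')
    session.add(injury)
    session.commit()
    generated_id = injury.id  # filled in by the database, no explicit id needed
```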
This was the result of some kind of bug during my transfer from SQLite to Postgres using pgloader. I found out someone else had encountered this, and following their instructions corrected my issue. The following steps were all done in IntelliJ's database tool:
Exported all of my data out with pg_dump
Reformatted my id schema to look like id = Column(Integer, Identity(start=2116), primary_key=True), where 2116 is one more than the last id I currently have in my database
Reloaded the database with pgloader
This was working without the Identity field, but the key started at 1 instead of 2116. Looking at this article helped me realize I needed to use the Identity field.
I'd like to use the following raw SQL to create an index in PostgreSQL:
CREATE INDEX ix_action_date ON events_map ((action ->> 'action'), date, map_id);
I tried to put this line into the model class's __table_args__, but I couldn't get it to work. So I simply solved it by using raw SQL in an Alembic migration:
conn = op.get_bind()
conn.execute(text("CREATE INDEX ..."))
and just using a dummy index in __table_args__ like:
Index('ix_action_date')
My only problem is that Alembic doesn't accept the dummy index with the same name, and every time I run a revision --autogenerate, it tells me the following:
SAWarning: Skipped unsupported reflection of expression-based index ix_action_date
% idx_name)
and then it adds the autogenerated index to the migration file:
op.create_index('ix_action_date', 'events_map', [], unique=False)
My question is:
How can I write raw SQL into a __table_args__ Index?
How can I really make my dummy index concept work? I mean an index which is only compared by name?
How can I write raw SQL into a __table_args__ Index?
To specify formula indexes, you have to provide a text element for the expression
example:

from sqlalchemy import Column, DateTime, Index, Integer, text
from sqlalchemy.dialects.postgresql import JSONB

class EventsMap(Base):  # Base is your declarative base
    __tablename__ = 'events_map'
    __table_args__ = (
        Index('ix_action_date', text("(action->>'action'), date, map_id")),
    )
    map_id = Column(Integer, primary_key=True)
    date = Column(DateTime)
    action = Column(JSONB)
How can I really make my dummy index concept work? I mean an index which is only compared by name?
It seems unnecessary to make your dummy index concept work. Either specify the full index expression in __table_args__ as I've shown above, or omit the index completely from the model and delegate its creation to your Alembic migration.
I'm trying to establish a table with a layout approximately like the following:
class Widget(db.Model):
    __tablename__ = 'widgets'
    ext_id = db.Column(db.Integer, primary_key=True)
    w_code = db.Column(db.String(34), unique=True)
    # other traits follow...
All field values are provided through an external system, and new Widgets are discovered and some of the omitted trait values may change over time (very gradually) but the ext_id and w_code are guaranteed to be unique. Given the nature of the values for ext_id it behaves ideally as a primary key.
However when I create a new record, specifying the ext_id value, the value is not used in storage. Instead the values in ext_id follow an auto-increment behavior.
>>> # from a clean database
>>> widget = Widget(ext_id=7723, w_code=u'IGF35ac9')
>>> session.add(widget)
>>> session.commit()
>>> Widget.query.first().ext_id
1
>>>
How can I specify to SQLAlchemy that the ext_id field should be used as the primary key field without auto-increment?
Note:
I could add an extra synthetic id column as the primary key and make ext_id be a unique column instead but this both complicates my code and adds a (minimal) extra bloat to the database and all I/O to it. I'm hoping to avoid that.
Issue originated from a larger project but I was able to create a smaller repro.
Testing with sqlite
Set autoincrement=False to disable creating a sequence or serial for the primary key.
ext_id = db.Column(db.Integer, primary_key=True, autoincrement=False)
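A runnable sketch of the fix against in-memory SQLite: with autoincrement=False, the explicitly assigned ext_id is stored verbatim rather than being replaced by a generated value. (Plain SQLAlchemy is used here instead of the Flask-SQLAlchemy db wrapper so the example is self-contained.)

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    # autoincrement=False: no sequence/serial/rowid behavior is set up,
    # so the value we assign is what gets stored.
    ext_id = Column(Integer, primary_key=True, autoincrement=False)
    w_code = Column(String(34), unique=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Widget(ext_id=7723, w_code='IGF35ac9'))
    session.commit()
    stored = session.query(Widget).first().ext_id  # 7723, not 1
```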
I am accessing a Postgres database using SQLAlchemy models. In one of my models I have a Column with the UUID type.
id = Column(UUID(as_uuid=True), default=uuid.uuid4, nullable=False, unique=True)
and it works when I try to insert new row (generates new id).
The problem is when I try to fetch a Person by id. I try it like this:
person = session.query(Person).filter(Person.id.like(some_id)).first()
where some_id is a string received from the client,
but then I get the error: (ProgrammingError) operator does not exist: uuid ~~ unknown.
How do I fetch/compare a UUID column in the database through SQLAlchemy?
Don't use like; use = (in ISO-standard SQL, = is the equality operator; there is no ==).
Keep in mind that UUIDs are stored in PostgreSQL as a binary type, not as a text string, so LIKE makes no sense. You could probably do uuid::text LIKE ?, but it would perform very poorly over large sets, because casting every row to text effectively ensures that indexes can't be used.
But = works, and is far preferable:
mydb=>select 'd796d940-687f-11e3-bbb6-88ae1de492b9'::uuid = 'd796d940-687f-11e3-bbb6-88ae1de492b9';
?column?
----------
t
(1 row)
I have zero experience with PostgreSQL and am deploying an app written in Python using SQLAlchemy to a server running Postgres.
For development, I used SQLite.
Things are going pretty smoothly, but I hit a bump I don't know how to resolve.
I have three tables that look like this:

class Car(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    ...

class Truck(db.Model):
    id = db.Column(db.String(32), primary_key=True)
    ...

class Vehicles(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    type = db.Column(db.String)   # This is either 'car' or 'truck'
    value = db.Column(db.String)  # That's the car or truck id
    ...
I have a query that selects from Vehicles where type = 'car' AND value = 10
This is throwing an error:
sqlalchemy.exc.ProgrammingError: (ProgrammingError) operator does not exist: integer = character varying
So I guess this is because Car.id is an int and Vehicle.value is a string.
How to write this query in sqlalchemy? Is there a way to write it and make it compatible with my sqlite dev environment and the pgsql production?
currently it looks like that
db.session.query(Vehicle).filter(Car.id == Vehicle.value)
PS: The truck id has to be a string and the car id has to be an int. I don't have control over that.
Simply cast to a string:
db.session.query(Vehicle).filter(str(Car.id) == Vehicle.value)
This only works if Car.id is a plain Python int (a local variable), not the column itself.
If you need to use this in a join, have the database cast it to a string:
from sqlalchemy.sql.expression import cast
db.session.query(Vehicle).filter(cast(Car.id, sqlalchemy.String) == Vehicle.value)
If the string value in the other column contains digits and possibly whitespace you may have to consider trimming, or instead casting the string value to an integer (and leave the integer column an integer).
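A self-contained sketch of the cast approach, with hypothetical data against in-memory SQLite; on PostgreSQL the same query avoids the integer = character varying error because the comparison becomes string = string on the database side:

```python
from sqlalchemy import Column, Integer, String, cast, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Car(Base):
    __tablename__ = 'car'
    id = Column(Integer, primary_key=True)

class Vehicle(Base):
    __tablename__ = 'vehicle'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    value = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Car(id=10))
    session.add(Vehicle(type='car', value='10'))
    session.commit()

    # cast() tells the database to compare string to string on every
    # backend, so the query works on SQLite in dev and Postgres in prod.
    match = session.query(Vehicle).filter(
        Vehicle.type == 'car',
        cast(Car.id, String) == Vehicle.value,
    ).first()
```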