So I am trying to implement a simple one-to-many relation between these two tables. I've read the docs, scoured the net, and put a lot of work into solving this, so I turn to you.
I have full access to the database and am able to create other tables, with similar relationships, that do work.
I'm using MariaDB ("mysql").
Every row in the table Tradables has a tick_size_id, and every row in Ticks has a tick_size_id. I want to connect them on that column, but I can't seem to figure out how.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

base = declarative_base()

class Tradables(base):
    __tablename__ = "tradables"
    id = Column(Integer, primary_key=True)
    tick_size_id = Column(Integer, nullable=False)
    ticks = relationship("Ticks")

class Ticks(base):
    __tablename__ = "ticks"
    id = Column(Integer, primary_key=True)
    tick_size_id = Column(Integer, ForeignKey("tradables.tick_size_id"))

def main():
    engine = create_engine("mysql+pymysql://user:password@localhost:3306/database")
    base.metadata.create_all(engine)

if __name__ == '__main__':
    main()
This does not work and fails with:
sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1005, 'Can\'t create table Trading.ticks (errno: 150 "Foreign key constraint is incorrectly formed")')
[SQL:
CREATE TABLE ticks (
    id INTEGER NOT NULL AUTO_INCREMENT,
    tick_size_id INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(tick_size_id) REFERENCES tradables (tick_size_id)
)]
What am I doing wrong?
Edit:
I tried both of these creation orders; both give the same result.
base.metadata.tables["ticks"].create(bind=engine)
base.metadata.tables["tradables"].create(bind=engine)
and
base.metadata.tables["tradables"].create(bind=engine)
base.metadata.tables["ticks"].create(bind=engine)
You need to have an index on tradables.tick_size_id. I'm not an alchemist, but I guess it would be something like
...
    __tablename__ = "tradables"
    id = Column(Integer, primary_key=True)
    tick_size_id = Column(Integer, nullable=False, index=True)
    ticks = relationship("Ticks")
...
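A minimal, self-contained sketch of the corrected mapping. An in-memory SQLite engine stands in for MariaDB here purely to show that create_all() succeeds; unique=True is used because it creates the index that MariaDB/InnoDB requires on the referenced column (and also satisfies engines that insist a foreign key target be unique):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()

class Tradables(Base):
    __tablename__ = "tradables"
    id = Column(Integer, primary_key=True)
    # unique=True creates the index the foreign key needs to point at
    tick_size_id = Column(Integer, nullable=False, unique=True)
    ticks = relationship("Ticks")

class Ticks(Base):
    __tablename__ = "ticks"
    id = Column(Integer, primary_key=True)
    tick_size_id = Column(Integer, ForeignKey("tradables.tick_size_id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)   # emits both CREATE TABLE statements
configure_mappers()                # verifies the relationship join condition
print(sorted(Base.metadata.tables))
```

On the MariaDB side this matches the answer above: errno 150 goes away once tradables.tick_size_id carries an index (or unique constraint) for the foreign key to reference.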
This is my error:
AssertionError: Dependency rule tried to blank-out primary key column 'user_upload_file.group_code' on instance '<UserUploadFileDto at 0x7f9358515640>'
Below are my tables.
Table 1:
class ServiceCodeDto(db.Model):
    __tablename__ = 'service_code'
    code = db.Column(db.String(10, 'utf8mb4_unicode_ci'), nullable=False, primary_key=True)
    name = db.Column(db.String(50, 'utf8mb4_unicode_ci'), nullable=False)
    # connect tables
    groups = db.relationship("GroupCodeDto", back_populates="services")
    user_upload_file = db.relationship("UserUploadFileDto", back_populates="services")
Table 2:
class GroupCodeDto(db.Model):
    __tablename__ = 'group_code'
    service_code = db.Column(db.String(10, 'utf8mb4_unicode_ci'), db.ForeignKey('service_code.code'), nullable=False, primary_key=True)
    code = db.Column(db.BigInteger, nullable=False, primary_key=True)
    name = db.Column(db.String(50, 'utf8mb4_unicode_ci'), nullable=False)
    # connect tables
    services = db.relationship("ServiceCodeDto", back_populates="groups")
    user_upload_file = db.relationship("UserUploadFileDto", back_populates="groups")
Table 3:
class UserUploadFileDto(db.Model):
    __tablename__ = 'user_upload_file'
    email = db.Column(db.String(50, 'utf8mb4_unicode_ci'), nullable=False)
    service_code = db.Column(db.String(10, 'utf8mb4_unicode_ci'), db.ForeignKey('service_code.code'), nullable=False, primary_key=True)
    group_code = db.Column(db.BigInteger, db.ForeignKey('group_code.code'), nullable=False, primary_key=True)
    # connect tables
    services = db.relationship("ServiceCodeDto", back_populates="user_upload_file")
    groups = db.relationship("GroupCodeDto", back_populates="user_upload_file")
Now, I have data like this:

group_code:
service_code | code | name
AA           | 100  | first
AB           | 100  | second

user_upload_file:
email        | service_code | group_code
aa@gmail.com | AA           | 100
ab@gmail.com | AA           | 100
I think the second row of group_code (AB-100) can be removed, because no child row in user_upload_file references AB-100.
But it doesn't work; I see this error:
AssertionError: Dependency rule tried to blank-out primary key column 'user_upload_file.group_code' on instance '<UserUploadFileDto at 0x7f9358515640>'
I guess that when I try to remove AB-100, the error occurs because 100 is referenced as a foreign key by user_upload_file. But GroupCodeDto.code is not the primary key on its own! I think the problem is that UserUploadFileDto only knows about one foreign key column from GroupCodeDto. So I tried changing UserUploadFileDto's service_code to db.ForeignKey('group_code.service_code'), but I can't find the right way; the documentation only covers the 'same column' case:
https://docs.sqlalchemy.org/en/14/orm/join_conditions.html#handling-multiple-join-paths
It doesn't work for me.
So SQLAlchemy can't tell which row is the 'real' parent. How can I set that?
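A hedged sketch of what I believe is the usual fix: declare the two columns as one composite ForeignKeyConstraint, so SQLAlchemy matches children on (service_code, code) together rather than on code alone. The models below are simplified stand-ins for the ones above (plain SQLAlchemy on in-memory SQLite, not Flask-SQLAlchemy):

```python
from sqlalchemy import (BigInteger, Column, ForeignKeyConstraint, String,
                        create_engine)
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class GroupCode(Base):
    __tablename__ = "group_code"
    service_code = Column(String(10), primary_key=True)
    code = Column(BigInteger, primary_key=True)
    name = Column(String(50), nullable=False)
    user_upload_file = relationship("UserUploadFile", back_populates="groups")

class UserUploadFile(Base):
    __tablename__ = "user_upload_file"
    # both columns reference group_code's composite primary key as one unit
    __table_args__ = (
        ForeignKeyConstraint(
            ["service_code", "group_code"],
            ["group_code.service_code", "group_code.code"],
        ),
    )
    email = Column(String(50), nullable=False)
    service_code = Column(String(10), primary_key=True)
    group_code = Column(BigInteger, primary_key=True)
    groups = relationship("GroupCode", back_populates="user_upload_file")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        GroupCode(service_code="AA", code=100, name="first"),
        GroupCode(service_code="AB", code=100, name="second"),
        UserUploadFile(email="aa@gmail.com", service_code="AA", group_code=100),
    ])
    session.commit()
    # AB-100 has no children under the composite join, so this delete works
    session.delete(session.get(GroupCode, ("AB", 100)))
    session.commit()
    remaining = session.query(GroupCode).count()
    print(remaining)
```

With the two independent single-column foreign keys, deleting AB-100 matched children on code alone and triggered the blank-out assertion; with the composite constraint, AB-100 simply has no children.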
I'm trying to create a DB using SQLAlchemy to connect with Snowflake, and Alembic for migrations, for an app created in FastAPI. I created some models, and everything works fine creating them in Snowflake, for example:
create or replace TABLE PRICE_SERVICE.FP7.LOCATION (
    ID NUMBER(38,0) NOT NULL autoincrement,
    CREATED_AT TIMESTAMP_NTZ(9),
    UPDATED_AT TIMESTAMP_NTZ(9),
    ADDRESS VARCHAR(16777216),
    LATITUDE VARCHAR(16777216) NOT NULL,
    LONGITUDE VARCHAR(16777216) NOT NULL,
    unique (LATITUDE),
    unique (LONGITUDE),
    primary key (ID)
);
but when I try to create a new object in this table, I get:
sqlalchemy.orm.exc.FlushError: Instance <Location at 0x7fead79677c0> has a NULL identity key. If this is an auto-generated value, check that the database table allows generation of new primary key values, and that the mapped Column object is configured to expect these generated values. Ensure also that this flush() is not occurring at an inappropriate time, such as within a load() event.
my model is:
class Location(Base):
    id = Column(Integer, primary_key=True)
    address = Column(String)
    latitude = Column(String, unique=True, nullable=False)
    longitude = Column(String, unique=True, nullable=False)
    buildings = relationship("Building", back_populates="location")
    quotes = relationship("Quote", back_populates="location")
    binds = relationship("Bind", back_populates="location")
and I'm trying to do this:
def create_location(db: Session, data: Dict[str, Any]) -> Location:
    location = Location(
        address=data["address"],  # type: ignore
        latitude=data["lat"],  # type: ignore
        longitude=data["lng"],  # type: ignore
    )
    db.add(location)
    db.commit()
    return location
I also tried using:
id = Column(Integer, Sequence("id_seq"), primary_key=True)
but I got:
sqlalchemy.exc.StatementError: (sqlalchemy.exc.ProgrammingError) (snowflake.connector.errors.ProgrammingError) 000904 (42000): SQL compilation error: error line 1 at position 7
backend_1 | invalid identifier 'ID_SEQ.NEXTVAL'
You forgot to define the Sequence in your model. When you define the Sequence value on table creation in Snowflake, a Sequence is generated at the schema level.
from sqlalchemy import Column, Integer, Sequence
...

class Location(Base):
    id = Column(Integer, Sequence("Location_Id"), primary_key=True,
                autoincrement=True)
    address = Column(String)
    ...
Make sure your user role has usage permission for that sequence and that should take care of your issue setting the next value for your primary key.
An approach that helps me with table primary keys is defining a mixin class that uses declared_attr to automatically define my primary keys based on the table name.
from sqlalchemy import Column, Integer, Sequence
from sqlalchemy.ext.declarative import declared_attr

class SomeMixin(object):
    @declared_attr
    def record_id(cls):
        """
        Use table name to define pk
        """
        return Column(
            f"{cls.__tablename__} Id",
            Integer(),
            primary_key=True,
            autoincrement=True
        )
Then you pass said mixin into your model
from sqlalchemy import Column, Integer, String, Sequence
from wherever import SomeMixin

class Location(Base, SomeMixin):
    address = Column(String)
    ...
Now Location.record_id gets set through the sequence you defined in the mixin.
Hope this helped
So I am currently trying to build a new app using flask-restful for the backend, and I am still learning since all of this is quite new to me. I have already set everything up with several different MySQL tables, detailed next, and all the relationships between them, and everything seems to be working fine, except that inserting new data is quite slow.
Here is a (simplified) explanation of my current setup. First of all, I have a Flights table.
class FlightModel(db.Model):
    __tablename__ = "Flights"
    flightid = db.Column(db.Integer, primary_key=True, nullable=False)
    [Other properties]
    reviewid = db.Column(db.Integer, db.ForeignKey('Reviews.reviewid'), index=True, nullable=False)
    review = db.relationship("ReviewModel", back_populates="flight", lazy='joined')
This table then points to a Reviews table, in which I store the global review left by the user.
class ReviewModel(db.Model):
    __tablename__ = "Reviews"
    reviewid = db.Column(db.Integer, primary_key=True, nullable=False)
    [Other properties]
    depAirportReviewid = db.Column(db.Integer, db.ForeignKey('DepAirportReviews.reviewid'), index=True, nullable=False)
    arrAirportReviewid = db.Column(db.Integer, db.ForeignKey('ArrAirportReviews.reviewid'), index=True, nullable=False)
    airlineReviewid = db.Column(db.Integer, db.ForeignKey('AirlineReviews.reviewid'), index=True, nullable=False)
    flight = db.relationship("FlightModel", uselist=False, back_populates="review", lazy='joined')
    depAirportReview = db.relationship("DepAirportReviewModel", back_populates="review", lazy='joined')
    arrAirportReview = db.relationship("ArrAirportReviewModel", back_populates="review", lazy='joined')
    airlineReview = db.relationship("AirlineReviewModel", back_populates="review", lazy='joined')
Then, a more detailed review can be left regarding different aspects of the flight, stored in yet another table (for example, the following DepAirportReviews table; there are three tables in total at this level).
class DepAirportReviewModel(db.Model):
    __tablename__ = "DepAirportReviews"
    reviewid = db.Column(db.Integer, primary_key=True, nullable=False)
    [Other properties]
    review = db.relationship("ReviewModel", uselist=False, back_populates="depAirportReview", lazy='joined')
The insert process is slow: it typically takes 1 second per flight, which is a problem when I try to bulk insert a few hundred of them.
I understand this is because of all these relationships and all the trips to the database they imply in order to retrieve the different ids for the different tables. Is that correct? Is there anything I can do to solve this, or will I need to redesign the tables to remove some of these relationships?
Thanks for any help!
EDIT: displaying the SQL executed directly showed what I expected: 7 simple queries in total are executed for each insertion, each taking ~300 ms. That's quite long; I guess it is mostly due to the time to reach the server. Nothing to be done except removing some foreign keys, right?
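Before dropping foreign keys, it may be worth checking how often the session commits. A generic sketch (simplified two-level models on in-memory SQLite, not the models above) of building the whole batch in memory and committing once, so the round-trip cost is paid per flush rather than per flight:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Review(Base):
    __tablename__ = "reviews"
    id = Column(Integer, primary_key=True)

class Flight(Base):
    __tablename__ = "flights"
    id = Column(Integer, primary_key=True)
    reviewid = Column(Integer, ForeignKey("reviews.id"), nullable=False)
    review = relationship("Review")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # build the whole object graph first; SQLAlchemy orders the INSERTs
    # itself at flush time and wires up the foreign key ids
    session.add_all([Flight(review=Review()) for _ in range(100)])
    session.commit()  # one transaction for the whole batch

with Session(engine) as session:
    inserted = session.query(Flight).count()
    print(inserted)
```

The per-row INSERTs still happen, but they share one transaction and one flush, which usually dominates when each round trip to the server costs hundreds of milliseconds.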
I have a table that does not have a primary key. And I really do not want to apply this constraint to this table.
In SQLAlchemy, I defined the table class by:
class SomeTable(Base):
    __table__ = Table('SomeTable', meta, autoload=True, autoload_with=engine)
When I try to query this table, I got:
ArgumentError: Mapper Mapper|SomeTable|SomeTable could not assemble any primary key columns for mapped table 'SomeTable'.
How can I drop the constraint that every table must have a primary key?
There is only one way that I know of to circumvent the primary key requirement in SQLAlchemy: map a specific column or columns as primary keys, even if they aren't primary keys themselves.
http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#how-do-i-map-a-table-that-has-no-primary-key.
There is no proper solution for this, but there are workarounds:
Workaround 1
Add the primary_key parameter to an existing column that does not have a primary key:
class SomeTable(Base):
    __tablename__ = 'some_table'
    some_other_already_existing_column = Column(..., primary_key=True)  # mark it as a primary key on the ORM side, whether or not it is one in the DB
Workaround 2
Declare a new dummy column on the ORM layer only, not in the actual DB; just define it in the SQLAlchemy model:
class SomeTable(Base):
    __tablename__ = 'some_table'
    column_not_exist_in_db = Column(Integer, primary_key=True)  # only to satisfy the mapper; don't add it in the DB
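A runnable sketch of Workaround 1 on an in-memory SQLite database (hypothetical table and column names): the table is created without any primary key constraint, and only the mapper is told to treat some_id as one.

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import Session, declarative_base

engine = create_engine("sqlite://")
with engine.begin() as conn:
    # the real table has no primary key constraint at all
    conn.execute(text("CREATE TABLE some_table (some_id INTEGER, note TEXT)"))

Base = declarative_base()

class SomeTable(Base):
    __tablename__ = "some_table"
    some_id = Column(Integer, primary_key=True)  # primary key on the ORM side only
    note = Column(String)

with Session(engine) as session:
    session.add(SomeTable(some_id=1, note="works"))
    session.commit()
    fetched = session.get(SomeTable, 1).note
    print(fetched)
```

This only behaves well if the chosen column really is a unique, non-null row identifier in practice; otherwise identity-map lookups will silently conflate rows.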
Disclaimer: Oracle only
Oracle databases secretly store something called rowid to uniquely identify each record in a table, even if the table doesn't have a primary key. I solved my lack-of-primary-key problem (which I did not cause!) by constructing my ORM object like:
class MyTable(Base):
    __tablename__ = 'stupid_poorly_designed_table'
    rowid = Column(String, primary_key=True)
    column_a = Column(String)
    column_b = Column(String)
    ...
You can see what rowid actually looks like (it's a hex value, I believe) by running:
SELECT rowid FROM stupid_poorly_designed_table;
Here is an example using __mapper_args__ and a synthetic primary key. Because the table holds time-series data, there is no need for a surrogate primary key: all rows can be uniquely addressed by a (timestamp, pair) tuple.
class Candle(Base):
    __tablename__ = "ohlvc_candle"
    __table_args__ = (
        sa.UniqueConstraint('pair_id', 'timestamp'),
    )
    #: Start second of the candle
    timestamp = sa.Column(sa.TIMESTAMP(timezone=True), nullable=False)
    open = sa.Column(sa.Float, nullable=False)
    close = sa.Column(sa.Float, nullable=False)
    high = sa.Column(sa.Float, nullable=False)
    low = sa.Column(sa.Float, nullable=False)
    volume = sa.Column(sa.Float, nullable=False)
    pair_id = sa.Column(sa.ForeignKey("pair.id"), nullable=False)
    pair = orm.relationship(Pair,
                            backref=orm.backref("candles",
                                                lazy="dynamic",
                                                cascade="all, delete-orphan",
                                                single_parent=True))
    __mapper_args__ = {
        "primary_key": [pair_id, timestamp]
    }
MSSQL Tested
I know this thread is ancient but I spent way too long getting this to work to not share it :)
import logging

from sqlalchemy import Column, Integer, Table, event
from sqlalchemy.ext.compiler import compiles

class RowID(Column):
    pass

@compiles(RowID)
def compile_mycolumn(element, compiler, **kw):
    return "row_number() OVER (ORDER BY (SELECT NULL))"

@event.listens_for(Table, "after_parent_attach")
def after_parent_attach(target, parent):
    if not target.primary_key:
        # if no pkey, create our own one based on the returned rowid
        # this is untested for writing - it likely won't work
        logging.info("No pkey defined for table, using rownumber %s", target)
        target.append_column(RowID('row_id', Integer, primary_key=True))
https://docs.sqlalchemy.org/en/14/faq/ormconfiguration.html#how-do-i-map-a-table-that-has-no-primary-key
One way from there:
In SQLAlchemy ORM, to map to a specific table, there must be at least one column designated as the primary key column; multi-column composite primary keys are of course also perfectly possible. These columns do not need to be known to the database as primary key columns. The columns only need to behave like a primary key, such as a non-nullable unique identifier for a row.
my code:
from ..meta import Base, Column, Integer, Date

class ActiveMinutesByDate(Base):
    __tablename__ = "user_computer_active_minutes_by_date"
    user_computer_id = Column(Integer(), nullable=False, primary_key=True)
    user_computer_date_check = Column(Date(), default=None, primary_key=True)
    user_computer_active_minutes = Column(Integer(), nullable=True)
The solution I found is to add an auto-incrementing primary key column to the table, then use that as your primary key. The database should deal with everything else beyond that.
So I'm quite new to SQLAlchemy.
I have a model Showing which has about 10,000 rows in the table. Here is the class:
class Showing(Base):
    __tablename__ = "showings"
    id = Column(Integer, primary_key=True)
    time = Column(DateTime)
    link = Column(String)
    film_id = Column(Integer, ForeignKey('films.id'))
    cinema_id = Column(Integer, ForeignKey('cinemas.id'))

    def __eq__(self, other):
        return (self.time == other.time
                and self.cinema == other.cinema
                and self.film == other.film)
Could anyone give me some guidance on the fastest way to insert a new showing if it doesn't already exist? I think it is slightly more complicated because a showing is only unique if the combination of time, cinema, and film is unique.
I currently have this code:
def AddShowings(self, showing_times, cinema, film):
    all_showings = self.session.query(Showing).options(joinedload(Showing.cinema), joinedload(Showing.film)).all()
    for showing_time in showing_times:
        tmp_showing = Showing(time=showing_time[0], film=film, cinema=cinema, link=showing_time[1])
        if tmp_showing not in all_showings:
            self.session.add(tmp_showing)
            self.session.commit()
            all_showings.append(tmp_showing)
which works, but seems to be very slow. Any help is much appreciated.
If any such object is unique based on a combination of columns, you need to mark those columns as a composite primary key. Add the primary_key=True keyword argument to each of them, dropping your id column altogether:
class Showing(Base):
    __tablename__ = "showings"
    time = Column(DateTime, primary_key=True)
    link = Column(String)
    film_id = Column(Integer, ForeignKey('films.id'), primary_key=True)
    cinema_id = Column(Integer, ForeignKey('cinemas.id'), primary_key=True)
That way your database can handle these rows more efficiently (no need for an incrementing column), and SQLAlchemy now automatically knows if two instances of Showing are the same thing.
I believe you can then just merge your new Showing back into the session:
def AddShowings(self, showing_times, cinema, film):
    for showing_time in showing_times:
        self.session.merge(
            Showing(time=showing_time[0], link=showing_time[1],
                    film=film, cinema=cinema)
        )