SQLAlchemy relationships - Python

I've run into another problem with SQLAlchemy. I have a relationship that is supposed to cascade-delete some data from my model, declared like so:
parentProject = relationship(Project, backref=backref("OPERATIONS", cascade="all,delete"))
This works fine as long as the data is from the current session. But if I start a session, add some data, then close it, then start another session and try to delete data from the previous one, the cascade doesn't work. The initializer of the database is as follows:
if isDBEmpty:
    LOGGER.info("Initializing Database")
    session = dao.Session()
    model.Base.metadata.create_all(dao.Engine)
    session.commit()
    LOGGER.info("Database Default Tables created successfully!")
    dao.storeEntity(model.User(administrator_username, md5(administrator_password).hexdigest(),
                               administrator_email, True, model.ROLE_ADMINISTRATOR))
    LOGGER.info("Database Default Generic Values were stored!")
else:
    LOGGER.info("Database already has some data, will not be re-created at this startup!")
I'm guessing I'm missing something very basic here. Any help would be much appreciated.
Regards,
Bogdan
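For reference, a delete cascade configured this way fires when the parent object is deleted through the ORM in whatever session currently holds it, even a brand-new one. A minimal self-contained sketch (model and column names are illustrative stand-ins, SQLite in-memory):

```python
# Sketch: cascade="all, delete" on a backref, exercised across two sessions.
from sqlalchemy import Column, Integer, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship, backref, sessionmaker

Base = declarative_base()

class Project(Base):
    __tablename__ = "project"
    id = Column(Integer, primary_key=True)

class Operation(Base):
    __tablename__ = "operation"
    id = Column(Integer, primary_key=True)
    project_id = Column(Integer, ForeignKey("project.id"))
    parentProject = relationship(
        Project, backref=backref("OPERATIONS", cascade="all, delete"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# First session: insert a project with one operation, then close.
s1 = Session()
s1.add(Operation(parentProject=Project()))
s1.commit()
s1.close()

# Second session: re-load the parent and delete it through the ORM;
# the configured cascade removes the child rows as well.
s2 = Session()
project = s2.query(Project).first()
s2.delete(project)
s2.commit()
print(s2.query(Operation).count())  # 0
```

Note that the cascade is applied by the ORM, so the parent has to be loaded into the current session and deleted via `session.delete()`; a bulk `query(...).delete()` bypasses it.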

Related

SQLAlchemy/Postgres: Intermittent Error Serializing Object After Commit

I have a Flask application that uses SQLAlchemy (with some Marshmallow for serialization and deserialization).
I'm currently encountering some intermittent issues when trying to dump an object post-commit.
To give an example, let's say I have implemented a (multi-tenant) system for tracking system faults of some sort. This information is contained in a fault table:
class Fault(Base):
    __tablename__ = "fault"

    fault_id = Column(BIGINT, primary_key=True)
    workspace_id = Column(Integer, ForeignKey('workspace.workspace_id'))
    local_fault_id = Column(Integer)
    name = Column(String)
    description = Column(String)
I've removed a number of columns in the interest of simplicity, but this is the core of the model. The columns should be largely self explanatory, with workspace_id effectively representing tenant, and local_fault_id representing a tenant-specific fault sequence number, which is handled via a separate fault_sequence table.
This fault_sequence table holds a counter against workspace, and is updated by means of a simple on_fault_created() function that is executed by a trigger:
CREATE TRIGGER fault_created
AFTER INSERT
ON "fault"
FOR EACH ROW
EXECUTE PROCEDURE on_fault_created();
So - the problem:
I have a Flask endpoint for fault creation, where we create an instance of a Fault entity, add this via a scoped session (session.add(fault)), then call session.commit().
It seems that this is always successful in creating the desired entities in the database, executing the sequence update trigger etc. However, when I then try to interrogate the fault object for updated fields (after commit()), around 10% of the time I find that each key/field just points to an Exception:
psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block
Which seems to boil down to the following:
(psycopg2.errors.InvalidTextRepresentation) invalid input syntax for integer: ""
[SQL: SELECT fault.fault_id AS fault_fault_id, fault.workspace_id AS fault_workspace_id, fault.local_fault_id AS fault_local_fault_id, fault.name as fault_name, fault.description as fault_description
FROM fault
WHERE fault.fault_id = %(param_1)s]
[parameters: {'param_1': 166}]
(Background on this error at: http://sqlalche.me/e/13/2j8)
My question, then, is what do we think could be causing this?
I think it smells like a race condition, with the update trigger not being complete before SQLAlchemy has tried to get the updated data; perhaps local_fault_id is null, and this is resulting in the invalid input syntax error.
That said, I have very low confidence on this. Any guidance here would be amazing, as I could really do with retrieving that sequence number that's incremented/handled by the update trigger.
Thanks
Edit 1:
Some more info:
I have tried removing the update trigger, in the hope of eliminating that as a suspect. This behaviour is still intermittently evident, so I don't think it's related to that.
I have tried using flush and refresh before the commit, and this lets me get the values that I need - though commit still appears to 'break' the fault object.
Edit 2:
So it really seems to be more Postgres than anything else. When I interrogate my database logs, this is the weirdest thing: I can copy and paste the command it says is failing, and I struggle to see how the integer value in the WHERE clause could possibly be evaluating to an empty string.
This same error is reproducible with SELECT ... FROM fault WHERE fault.fault_id = '', which in no way seems to be the query making it to the DB.
I am stumped.
Your sentence "This same error is reproducible with SELECT ... FROM fault WHERE fault.fault_id = '', which in no way seems to be the query making it to the DB." seems to indicate that you are trying to access an object that does not have the database primary key "fault_id".
I guess, given that you did not provide the code, that you are adding the object to your session (session.add), committing (session.commit), and then using the object. As fault_id is autogenerated by the database, the fault object in the session (in memory) does not have fault_id.
I believe you can correct this with:
session.add(fault)
session.commit()
session.refresh(fault)
The refresh needs to be AFTER commit to refresh the fault object and retrieve fault_id.
If you are using async, you need
session.add(fault)
await session.commit()
await session.refresh(fault)
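As background on why the attribute access itself is what raises: with the Session default expire_on_commit=True, commit() expires every loaded attribute, and the next attribute access issues a fresh SELECT; if that SELECT fails, the session's transaction is aborted and every further access raises until a rollback. A minimal sketch of the expire/reload behavior (SQLite in-memory, illustrative single-column model):

```python
# Sketch: commit() expires the object; the next attribute access
# transparently re-SELECTs the row from the database.
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Fault(Base):
    __tablename__ = "fault"
    fault_id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()  # expire_on_commit=True by default

fault = Fault()
session.add(fault)
session.commit()         # fault's attributes are now expired
print(fault.fault_id)    # attribute access triggers the post-commit SELECT -> 1
```

This is why the error surfaces "on dump": Marshmallow touching the expired fields is what runs the failing SELECT.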

How to run a script whenever there is a new entry in the database, using SQLAlchemy in Python?

I am new to SQLAlchemy and need a way to run a script whenever a new entry is added to a table. I am currently using the following method to get the task done, but I am sure there has to be a more efficient way.
I am using Python 2 for my project and MS SQL as the database.
Suppose my table is carData and I add a new row of car details from the website. The new car data is added to carData. My code works as follows:
class CarData:
    <fields for table class>

with session_scope() as session:
    car_data = session.query(CarData)
    reference_df = pd.read_sql_query(car_data.statement, car_data.session.bind)

while True:
    with session_scope() as session:
        new_df = pd.read_sql_query(car_data.statement, car_data.session.bind)
        if len(new_df) > len(reference_df):
            print "New Car details added"
            <code to get the id of new row added>
            <run script>
            reference_df = new_df
    sleep(10)
The above is of course a much simpler version of the code that I am using, but the idea is to have a reference point and then keep checking every 10 seconds whether there is a new entry. However, even with session_scope() I have seen connection issues after a few days, as this script is supposed to run indefinitely.
Is there a better way to know that a new row has been added, get the id of the new row and run the required script?
I believe the error you've described is a connectivity issue with the database, e.g. a temporary network problem:
OperationalError: TCP Provider: Error code 0x68
So what you need to do is cater for this with error handling!
# from sqlalchemy.exc import OperationalError
try:
    new_df = pd.read_sql_query(car_data.statement, car_data.session.bind)
except OperationalError:  # the connectivity error described above
    print("Problem with query, will try again shortly")
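As a lighter-weight polling approach, one could also track the highest primary key seen and fetch only newer rows, instead of diffing whole-table DataFrames on every pass. A hedged sketch; the model and column names here are illustrative, not the asker's actual schema:

```python
# Sketch: poll for new rows by remembering the highest id processed.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class CarData(Base):
    __tablename__ = "carData"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

last_seen_id = 0

def poll_once():
    """One polling pass: return the ids of rows added since the last pass."""
    global last_seen_id
    session = Session()
    try:
        rows = (session.query(CarData)
                .filter(CarData.id > last_seen_id)
                .order_by(CarData.id)
                .all())
        new_ids = [r.id for r in rows]
        if new_ids:
            last_seen_id = new_ids[-1]
        return new_ids
    finally:
        session.close()

session = Session()
session.add(CarData(name="first"))
session.commit()
print(poll_once())  # [1]
print(poll_once())  # []
```

Calling poll_once() on the 10-second timer then only transfers the rows that are actually new, and the returned ids are exactly the ones to hand to the follow-up script.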

SQLAlchemy connection in Flask

I have the following code in Flask:
sql = text('select * from person')
results = self.db.engine.execute(sql)
for row in results:
    print(".............", row)  # prints nothing
people = Person.query.all()  # shows all person data
Now, given this situation, it's obvious that self.db is somehow not using the same connection that Person.query is using. Given that, can I get the connection somehow from the Person.query object?
P.S. This is for testing and I'm using an SQLite3 database. I tried this in Postgres, but the outcome is the same.
Just figured it out. Try Person.query.session.execute(sql). Voilà!
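A likely explanation for the empty result, offered as a guess: the test data lived in the session's still-open, uncommitted transaction, which a separate connection checked out by engine.execute() cannot see; executing through the query's session runs on that same connection. A minimal sketch with plain SQLAlchemy (no Flask):

```python
# Sketch: raw SQL run through the query's own session sees rows that
# are pending in that session's transaction.
from sqlalchemy import Column, Integer, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Person())
session.flush()  # row exists only inside this session's open transaction

# Executing on the query's session uses the same connection/transaction,
# so the pending row is visible:
query = session.query(Person)
rows = query.session.execute(text("select id from person")).fetchall()
print(rows)  # [(1,)]
```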

Python SQLAlchemy relation does not exist

I would like to update my table through Python using SQLAlchemy. Since the table I would like to update is not in the default schema, I referred to this question and set up the session with sess.execute("SET search_path TO client1").
The full code example is as follows:
session = DBSession()
session.execute("SET search_path TO client1")
session.commit()
total_rows = session.query(table).all()
for row in total_rows:
    try:
        row.attr1 = getAttr1()
        row.attr2 = getAttr2()
        session.commit()
    except Exception as inst:
        print(inst)
        session.rollback()
Though my code can update the table at the beginning, after several hundred iterations (around 500, maybe?) it throws an exception saying that the relation table does not exist. My current workaround is to run my code several times, updating 500 records each time, but I don't think that is a proper solution, and I'm still looking for the reason this exception occurs.
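One possible cause, offered as a guess: SET search_path only affects the connection it runs on, and a Session can return its connection to the pool after commit(), so a later statement may run on a different connection that never received the setting. Pinning the schema at the engine level avoids per-connection state entirely; a configuration sketch (the DSN is illustrative):

```python
# Sketch: schema_translate_map rewrites schema-less table references to
# the "client1" schema on every connection, so no per-session SET is
# needed and pool recycling cannot drop the setting.
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@localhost/mydb",  # illustrative DSN
    execution_options={"schema_translate_map": {None: "client1"}},
)
```

With this in place, the SET search_path / commit() preamble can be removed from the session setup.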

SQLAlchemy get every row the matches query and loop through them

I'm new to Python and SQLAlchemy. I've been playing around with retrieving things from the database, and it has worked every time, but I'm a little unsure what to do when the select statement will return multiple rows. I tried using some older code that worked before I started using SQLAlchemy, but db is a SQLAlchemy object and doesn't have an execute() method.
application = Applications.query.filter_by(brochureID=brochure.id)
cur = db.execute(application)
entries = cur.fetchall()
and then in my HTML file
{% for entry in entries %}
    var getEmail = {{entry.2|tojson|safe}};
    emailArray.push(getEmail);
{% endfor %}
I looked in the SQLAlchemy documentation and couldn't find an equivalent of .first() for getting all the rows. Can anyone point me in the right direction? No doubt it's something very small.
Your query is correct, you just need to change the way you interact with the result. The method you are looking for is all().
application = Applications.query.filter_by(brochureID=brochure.id)
entries = application.all()
The usual way to work with ORM queries is through the Session class; somewhere you should have:
engine = sqlalchemy.create_engine("sqlite:///...")
Session = sqlalchemy.orm.sessionmaker(bind=engine)
I'm not familiar with Flask, but it likely does some of this work for you.
With a Session factory, your query is instead:
session = Session()
entries = session.query(Application) \
.filter_by(...) \
.all()
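To make the .all() approach concrete end to end, a self-contained sketch (the model and column names are illustrative stand-ins for the asker's Applications/brochureID):

```python
# Sketch: filter_by(...) builds the query; .all() returns a list of
# matching objects that can be looped over (e.g. from a template).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Applications(Base):
    __tablename__ = "applications"
    id = Column(Integer, primary_key=True)
    brochureID = Column(Integer)
    email = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Applications(brochureID=1, email="a@example.com"),
    Applications(brochureID=1, email="b@example.com"),
    Applications(brochureID=2, email="c@example.com"),
])
session.commit()

entries = (session.query(Applications)
           .filter_by(brochureID=1)
           .order_by(Applications.id)
           .all())
print([e.email for e in entries])  # ['a@example.com', 'b@example.com']
```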
