I have an Event model where external_id is set to be unique.
session1 = create_session()
session2 = create_session()
e1 = Event(external_id=1, headline='session1')
session1.add(e1)
e2 = Event(external_id=1, headline='session2')
session2.add(e2)
session1.commit()
session2.commit()
s = create_session()
e = s.query(Event).filter_by(external_id=1).first()
print e.headline
I am getting the output "session1" with no errors, which means session2.commit() failed silently. Ultimately I would like to be able to choose whether to overwrite what's in the DB or not. So if session2.commit() fails, I would like the option of changing the insert to an update for some cases. Can anyone help with this? Thanks.
EDITED:
I found the answer. The way to do it is a two-pass mechanism:
both sessions should add/commit a row with minimal information (the unique key only)
both sessions should then query and get the row for update
if we want one session to have priority, make sure it uses SELECT ... FOR UPDATE
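A minimal sketch of that two-pass flow, assuming an Event model like the one above and an in-memory SQLite database (where SELECT ... FOR UPDATE is silently skipped, so the lock only takes effect on engines such as PostgreSQL or MySQL):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    external_id = Column(Integer, unique=True)
    headline = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

def upsert_headline(engine, external_id, headline):
    # Pass 1: insert a minimal row carrying only the unique key.
    with Session(engine) as session:
        session.add(Event(external_id=external_id))
        try:
            session.commit()
        except IntegrityError:
            # The row already exists; fall through to the update pass.
            session.rollback()
    # Pass 2: fetch the row FOR UPDATE and fill in the rest.
    with Session(engine) as session:
        row = (session.query(Event)
               .filter_by(external_id=external_id)
               .with_for_update()
               .one())
        row.headline = headline
        session.commit()

upsert_headline(engine, 1, "session1")
upsert_headline(engine, 1, "session2")  # the later writer's update wins
```

With a server database, the second caller's pass 2 blocks on the row lock until the first caller commits, so the update is serialized instead of failing.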
Note: Using flask_sqlalchemy here
I'm working on adding versioning to multiple services on the same DB. To make sure it works, I'm adding unit tests that confirm I get an error (in this case the error should be StaleDataError). For the other services, in other languages, I pulled the same object twice from the DB, updated one instance and saved it, then updated the other instance and tried to save that as well.
However, because SQLAlchemy maintains an identity map (an in-memory cache layer between the DB and the service), when I update the first object it automatically updates the other object I hold in memory. Does anyone have a way around this? I created a second session (that solution had worked in other languages), but SQLAlchemy knows not to hold the same object in two sessions.
I was able to manually test it by putting time.sleep() halfway through the test and manually changing data in the DB, but I'd like a way to test this using just the unit code.
Example code:
def test_optimistic_locking(self):
    c = Customer(formal_name='John', id=1)
    db.session.add(c)
    db.session.flush()

    cust = Customer.query.filter_by(id=1).first()
    db.session.expire(cust)
    same_cust = Customer.query.filter_by(id=1).first()
    db.session.expire(same_cust)

    same_cust.formal_name = 'Tim'
    db.session.add(same_cust)
    db.session.flush()
    db.session.expire(same_cust)

    cust.formal_name = 'Jon'
    db.session.add(cust)
    with self.assertRaises(StaleDataError):
        db.session.flush()
    db.session.rollback()
It actually is possible: you need to create two separate sessions. See the unit tests of SQLAlchemy itself for inspiration. Here's a snippet from one of our unit tests, written with pytest:
def test_article__versioning(connection, db_session: Session):
    article = ProductSheetFactory(title="Old Title", version=1)
    db_session.refresh(article)
    assert article.version == 1

    db_session2 = Session(bind=connection)
    article2 = db_session2.query(ProductSheet).get(article.id)
    assert article2.version == 1

    article.title = "New Title"
    article.version += 1
    db_session.commit()
    assert article.version == 2

    with pytest.raises(sqlalchemy.orm.exc.StaleDataError):
        article2.title = "Yet another title"
        assert article2.version == 1
        article2.version += 1
        db_session2.commit()
Hope that helps. Note that we set "version_id_generator": False in the model, which is why we increment the version ourselves; see the docs for details.
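For context, a model configured that way might look like the following sketch; the ProductSheet columns and the in-memory SQLite engine here are assumptions for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class ProductSheet(Base):
    __tablename__ = "product_sheets"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    version = Column(Integer, nullable=False)
    __mapper_args__ = {
        # Use `version` for optimistic locking, but disable the automatic
        # counter: application code must set/bump it on every change.
        "version_id_col": version,
        "version_id_generator": False,
    }

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
sheet = ProductSheet(id=1, title="Old Title", version=1)
session.add(sheet)
session.commit()

sheet.title = "New Title"
sheet.version += 1   # manual increment, as in the test above
session.commit()     # UPDATE ... WHERE version = 1; raises StaleDataError
                     # if another writer already moved the row past 1
```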
For anyone who comes across this question: my current hypothesis is that it can't be done. SQLAlchemy is incredibly powerful and, given that the functionality is so good that we can't test this line, we should trust that it works as expected.
Below is my current code. It connects successfully to the organization. How can I fetch the results of a query in Azure like they do here? I know this was solved, but there isn't an explanation, and there's quite a big gap in what they're doing.
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import Wiql
personal_access_token = 'xxx'
organization_url = 'zzz'
# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
wit_client = connection.clients.get_work_item_tracking_client()
results = wit_client.query_by_id("my query ID here")
P.S. Please don't link me to the GitHub repo or the documentation; I've looked at both extensively for days and it hasn't helped.
Edit: I've added the results line that successfully fetches the query. However, it returns a WorkItemQueryResult object, which is not exactly what is needed: I need a way to view the columns and the results of the query for those columns.
So I've figured this out, in probably the most inefficient way possible, but I hope it helps someone else and that they find a way to improve it.
The issue with the WorkItemQueryResult object stored in the variable "results" is that it doesn't expose the contents of the work items.
So the goal is to be able to use the get_work_item method, which requires the id field; you can get that (in a rather roundabout way) through item.target.id from the results' work_item_relations. The code below is added on.
for item in results.work_item_relations:
    id = item.target.id
    work_item = wit_client.get_work_item(id)
    fields = work_item.fields
This gets the id of every work item in your result class and then grants access to the fields of that work item, which you can read with fields.get("System.Title"), etc.
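As a rough, self-contained illustration of that traversal, here is the same loop run against stand-in objects instead of a live wit_client (the SimpleNamespace stubs and the sample titles are invented for the demo; only System.Title is a real field reference name):

```python
from types import SimpleNamespace

# Stand-ins for the WorkItemQueryResult's relations and for the work items
# that a real wit_client.get_work_item(id) call would return.
results = SimpleNamespace(work_item_relations=[
    SimpleNamespace(target=SimpleNamespace(id=7)),
    SimpleNamespace(target=SimpleNamespace(id=8)),
])
fake_items = {
    7: SimpleNamespace(fields={"System.Title": "Fix login bug"}),
    8: SimpleNamespace(fields={"System.Title": "Add dark mode"}),
}

titles = []
for item in results.work_item_relations:
    # In real code: work_item = wit_client.get_work_item(item.target.id)
    work_item = fake_items[item.target.id]
    titles.append(work_item.fields.get("System.Title"))

print(titles)  # prints ['Fix login bug', 'Add dark mode']
```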
I would like to update my table through Python using SQLAlchemy. Since the table I would like to update is not in the default schema, I referred to this question and set the search path on the session with sess.execute("SET search_path TO client1").
The whole code example is shown as follows:
session = DBSession()
session.execute("SET search_path TO client1")
session.commit()

total_rows = session.query(table).all()
for row in total_rows:
    try:
        row.attr1 = getAttr1()
        row.attr2 = getAttr2()
        session.commit()
    except Exception as inst:
        print(inst)
        session.rollback()
Though my code updates the table at first, after several hundred iterations (around 500, maybe?) it throws an exception saying the relation table does not exist. My current workaround is to run the script several times, updating about 500 records each pass, but that is not a real solution, and I am still looking for the root cause of this exception.
I'm looking for a complete example of using SELECT ... FOR UPDATE in SQLAlchemy, but haven't found one by googling. I need to lock a single row and update a column; the following code doesn't work (it blocks forever):
s = table.select(table.c.user == "test", for_update=True)
# Do update or not depending on the row
u = table.update().where(table.c.user == "test")
u.execute(email="foo")
Do I need a commit? How do I do that? As far as I know you need to:
begin transaction
select ... for update
update
commit
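Putting those four steps together with the present-day Core API might look like this sketch (the users table definition is an assumption, and the in-memory SQLite backend silently skips FOR UPDATE, so real locking needs a server database such as PostgreSQL):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select, update)

metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("user", String, unique=True),
              Column("email", String))

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(users.insert().values(user="test", email="old"))

# begin transaction / select ... for update / update / commit
with engine.begin() as conn:  # begin; commits automatically on exit
    row = conn.execute(
        select(users).where(users.c.user == "test").with_for_update()
    ).one()  # the row is now locked until the transaction ends
    # decide whether to update based on `row`, then:
    conn.execute(
        update(users).where(users.c.user == "test").values(email="foo")
    )
# leaving the block commits and releases the lock
```

The key point is that the SELECT and the UPDATE run on the same connection inside one transaction; using two connections is exactly what makes the question's version block forever.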
If you are using the ORM, try the with_for_update function:
foo = session.query(Foo).filter(Foo.id==1234).with_for_update().one()
# this row is now locked
foo.name = 'bar'
session.add(foo)
session.commit()
# this row is now unlocked
Late answer, but maybe someone will find it useful.
First, you don't need to commit (at least not in between the queries, which I assume you are asking about). Your second query hangs indefinitely because you are effectively creating two concurrent connections to the database: the first one obtains the lock on the selected records, then the second one tries to modify the locked records, so it cannot work. (By the way, in the example given you never actually execute the first query, so I'm assuming that in your real tests you did something like s.execute() somewhere.) So, to the point: a working implementation should look more like:
s = conn.execute(table.select(table.c.user == "test", for_update=True))
u = conn.execute(table.update().where(table.c.user == "test"), {"email": "foo"})
conn.commit()
Of course, in such a simple case there's no reason to do any locking, but I guess it is only an example and you were planning to add some additional logic between those two calls.
Yes, you do need to commit, which you can do on the Connection or by creating a Transaction explicitly. Also, the new values are specified in the values(...) method, not in execute():
>>> conn.execute(users.update().
...              where(users.c.user == "test").
...              values(email="foo")
...              )
>>> conn.commit()
I've run into another problem with SQLAlchemy. I have a relationship that is supposed to cascade-delete some data from my model, declared like so:
parentProject = relationship(Project, backref=backref("OPERATIONS", cascade="all,delete"))
This works fine as long as the data is from the current session. But if I start a session, add some data, then close it, then start another session and try to delete data from the previous one, the cascade doesn't work. The initializer of the database is as follows:
if isDBEmpty:
    LOGGER.info("Initializing Database")
    session = dao.Session()
    model.Base.metadata.create_all(dao.Engine)
    session.commit()
    LOGGER.info("Database Default Tables created successfully!")
    dao.storeEntity(model.User(administrator_username,
                               md5(administrator_password).hexdigest(),
                               administrator_email, True,
                               model.ROLE_ADMINISTRATOR))
    LOGGER.info("Database Default Generic Values were stored!")
else:
    LOGGER.info("Database already has some data, will not be re-created at this startup!")
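For comparison, here is a minimal self-contained version of such a cascading backref (the Project/Operation models and the in-memory SQLite engine are assumptions). In this sketch the cascade does fire across sessions, which suggests the problem lies elsewhere in the setup:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, backref, declarative_base, relationship

Base = declarative_base()

class Project(Base):
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Operation(Base):
    __tablename__ = "operations"
    id = Column(Integer, primary_key=True)
    project_id = Column(Integer, ForeignKey("projects.id"))
    parentProject = relationship(
        Project, backref=backref("OPERATIONS", cascade="all,delete"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# First session: create a project with one operation, then close.
with Session(engine) as session:
    project = Project(id=1, name="demo")
    session.add(Operation(id=1, parentProject=project))
    session.commit()

# Second session: deleting the parent cascades to the operations.
with Session(engine) as session:
    session.delete(session.get(Project, 1))
    session.commit()
    assert session.query(Operation).count() == 0
```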
I'm guessing I'm missing something very basic here. Some help would be much appreciated.
Regards,
Bogdan