I have run a few trials and there seems to be some improvement in speed if I set autocommit to False.
However, I am worried that if I do only one commit at the end of my code, the database rows will not be updated. So, for example, if I do several updates to the database and none are committed, does querying the database then give me the old data? Or does it know it should commit first?
Or, am I completely mistaken as to what commit actually does?
Note: I'm using pyodbc and MySQL. Also, the tables I'm using are InnoDB, does that make a difference?
There are some situations which trigger an implicit commit. However, under most circumstances not committing means the data will be unavailable to other connections.
It also means that if another connection tries to perform an action that conflicts with an ongoing transaction (i.e. another connection has locked that resource), that request will have to wait for the lock to be released.
As for performance concerns, autocommit causes every change to be immediate. The performance hit will be quite noticeable on big tables as on each commit indexes and constraints need to be updated/checked too. If you only commit after a series of queries, indexes/constraints will only be updated at that time.
On the other hand, not committing frequently enough forces the server to do extra work keeping the pending, uncommitted changes around alongside the committed data. So there is a trade-off.
And yes, using InnoDB makes a difference. If you were using, for instance, MyISAM, you wouldn't have transactions at all, so any change would be permanent immediately (similar to autocommit=True). On MyISAM you can play with the delay-key-write option.
For more information about transactions have a look at the official documentation. For more tips about optimization have a look at this article.
The default isolation level for InnoDB is REPEATABLE READ, so all reads within a transaction are consistent with the transaction's snapshot. Rows you insert are visible to queries on your own connection right away, but other connections will not see them until you commit, and under REPEATABLE READ not even then until they take a new snapshot. If you want other transactions to see newly committed rows immediately, set their isolation level to READ COMMITTED.
As long as you use the same connection, the database should show you a consistent view of the data, i.e. including all changes made so far in this transaction.
Once you commit, the changes will be written to disk and be visible to other (new) transactions and connections.
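To illustrate, here is a minimal pyodbc sketch of that behaviour; the connection string, table, and column names are made up for the example:
import pyodbc

# Hypothetical DSN, table and columns, purely for illustration.
conn = pyodbc.connect("DSN=mydb;UID=user;PWD=secret", autocommit=False)
cursor = conn.cursor()

for i in range(1000):
    cursor.execute("UPDATE items SET qty = qty + 1 WHERE id = ?", i)

# A query on the *same* connection already sees the uncommitted updates.
cursor.execute("SELECT qty FROM items WHERE id = 0")
print(cursor.fetchone())

# Other connections only see the changes after this single commit.
conn.commit()
conn.close()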
Related
I'm working on a project that is using Psycopg2 to connect to a Postgres database in Python, and in the project there is a place where all the modifications to the database are committed after performing a certain number of insertions.
So I have a basic question:
Is there any benefit to committing in smaller chunks, or is it the same as waiting to commit until the end?
For example, say I'm going to insert 7,000 rows of data. Should I commit after every 1,000 inserts or just wait until all the data is added?
If there are problems with large commits, what are they? Could I possibly crash the database, or cause some sort of overflow?
Unlike in some other database systems, in PostgreSQL there is no problem with modifying arbitrarily many rows in a single transaction.
Usually it is better to do everything in a single transaction, so that the whole thing succeeds or fails, but there are two considerations:
If the transaction takes a lot of time, it will keep VACUUM from doing its job on the rest of the database for the duration of the transaction. That may cause table bloat if there is a lot of concurrent write activity.
It may be useful to do the operation in batches if you expect many failures and don't want to restart from the beginning every time.
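As a rough psycopg2 sketch of that trade-off (the connection string, table, and data are invented), committing in chunks looks like this; committing once at the end just means moving conn.commit() outside the loop:
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # hypothetical connection string
cur = conn.cursor()

rows = [(i, "value-%d" % i) for i in range(7000)]  # made-up data
CHUNK = 1000

for start in range(0, len(rows), CHUNK):
    cur.executemany("INSERT INTO items (id, payload) VALUES (%s, %s)", rows[start:start + CHUNK])
    conn.commit()  # one transaction per chunk

conn.close()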
I've been trying to test this out, but haven't been able to come to a definitive answer. I'm using SQLAlchemy on top of MySQL and trying to prevent having threads that do a select, get a SHARED_READ lock on some table, and then hold on to it (preventing future DDL operations until it's released). This happens when queries aren't committed. I'm using SQLAlchemy Core, where as far as I could tell .execute() essentially works in autocommit mode, issuing a COMMIT after everything it runs unless explicitly told we're in a transaction. Nevertheless, in show processlist, I'm seeing sleeping threads that still have SHARED_READ locks on a table they once queried. What gives?
Judging from your post, you're operating in "non-transactional" mode: either using an SQLAlchemy Connection without an ongoing transaction, or the shorthand engine.execute(). In this mode of operation SQLAlchemy detects INSERT, UPDATE, DELETE, and DDL statements and automatically issues a commit afterwards, but not for everything else, such as SELECT statements. See "Understanding Autocommit". For SELECTs that invoke mutating stored procedures and the like, which do require a commit, use
conn.execute(text('SELECT ...').execution_options(autocommit=True))
You should also consider closing connections when the thread is done with them for the time being. Closing will call rollback() on the underlying DBAPI connection, which per PEP-0249 is (probably) always in transactional state. This will remove the transactional state and/or locks, and returns the connection to the connection pool. This way you shouldn't need to worry about selects not autocommitting.
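A sketch of both suggestions against SQLAlchemy 1.x (the engine URL, table, and procedure name are placeholders; the autocommit execution option is the legacy 1.x mechanism the answer refers to):
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pw@localhost/db")  # hypothetical URL

# Explicitly closing the connection rolls back the DBAPI transaction,
# releasing any metadata locks the SELECT acquired, and returns it to the pool.
conn = engine.connect()
try:
    rows = conn.execute(text("SELECT * FROM some_table")).fetchall()
finally:
    conn.close()

# A SELECT that mutates state and therefore needs a COMMIT:
with engine.connect() as conn:
    conn.execute(text("SELECT my_mutating_proc()").execution_options(autocommit=True))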
From the docs of atomic()
atomic blocks can be nested
This sounds like a great feature, but in my use case I want the opposite: I want the transaction to be durable as soon as the block decorated with @atomic() is exited successfully.
Is there a way to ensure durability in Django's transaction handling?
Background
Transactions are ACID. The "D" stands for durability. That's why I think transactions can't be nested without losing the "D".
Example: If the inner transaction is successful, but the outer transaction is not, then the outer and the inner transaction get rolled back. The result: The inner transaction was not durable.
I use PostgreSQL, but AFAIK this should not matter much.
You can't do that through any API.
Transactions can't be nested while retaining all ACID properties, and not all databases support nested transactions.
Only the outermost atomic block creates a transaction. Inner atomic blocks create a savepoint inside the transaction, and release or roll back the savepoint when exiting the inner block. As such, inner atomic blocks provide atomicity, but as you noted, not e.g. durability.
Since the outermost atomic block creates a transaction, it must provide atomicity, and you can't commit a nested atomic block to the database if the containing transaction is not committed.
The only way to ensure that the inner block is committed is to make sure that the code in the transaction finishes executing without any errors.
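A small sketch of the savepoint behaviour described above (the Order model is made up): the inner block survives an error raised inside it, but nothing survives if the outer block fails.
from django.db import transaction

def my_operation():
    with transaction.atomic():            # outer block: the real transaction
        Order.objects.create(ref="A1")    # hypothetical model
        try:
            with transaction.atomic():    # inner block: just a savepoint
                Order.objects.create(ref="A2")
                raise ValueError("inner failure")
        except ValueError:
            pass  # only the savepoint rolls back; "A1" is still pending
        # If an exception escaped the outer block instead, *everything* would
        # roll back, including work "committed" by inner atomic blocks.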
I agree with knbk's answer that it is not possible: durability is only present at the level of a transaction, and atomic provides that. It does not provide it at the level of save points. Depending on the use case, there may be workarounds.
I'm guessing your use case is something like:
@atomic  # possibly implicit if ATOMIC_REQUESTS is enabled
def my_view():
    run_some_code()           # It's fine if this gets rolled back.
    charge_a_credit_card()    # It's not OK if this gets rolled back.
    run_some_more_code()      # This shouldn't roll back the credit card.
I think you'd want something like:
@transaction.non_atomic_requests
def my_view():
    with atomic():
        run_some_code()
    with atomic():
        charge_a_credit_card()
    with atomic():
        run_some_more_code()
If your use case is for credit cards specifically (as mine was when I had this issue a few years ago), my coworker discovered that credit card processors actually provide mechanisms for handling this. A similar mechanism might work for your use case, depending on the problem structure:
@atomic
def my_view():
    run_some_code()
    result = charge_a_credit_card(capture=False)
    if result.successful:
        transaction.on_commit(lambda: result.capture())
    run_some_more_code()
Another option would be to use a non-transactional persistence mechanism for recording what you're interested in, like a log database, or a redis queue of things to record.
This type of durability (i.e. a nested block staying committed while the outer block gets rolled back) is impossible with one connection, due to ACID. It is a consequence of ACID, not a problem with Django. Imagine, for example, a database where table B has a foreign key to table A.
CREATE TABLE A (id serial primary key);
CREATE TABLE B (id serial primary key, a_id integer references A (id));
-- transaction
INSERT INTO A DEFAULT VALUES RETURNING id AS new_a_id
-- like it would be possible to create an inner transaction
INSERT INTO B (a_id) VALUES (new_a_id)
-- commit
-- rollback (= integrity problem)
If the inner "transaction" should be durable while the (outer) transaction get rolled back then the integrity would be broken. The rollback operation must be always implemented so that it can never fail, therefore no database would implement a nested independent transaction. It would be against the principle of causality and the integrity can not be guarantied after such selective rollback. It is also against atomicity.
A transaction is tied to a database connection. If you create two connections, two independent transactions are created. One connection doesn't see the uncommitted rows of other transactions (an isolation level that allows this exists, but support depends on the database backend), no foreign keys to them can be created, and integrity is preserved after a rollback by the design of the database backend.
Django supports multiple databases, therefore multiple connections.
# no ATOMIC_REQUESTS should be set for "other_db" in DATABASES
@transaction.atomic  # atomic for the database "default"
def my_view():
    with atomic():  # or set atomic() here, for the database "default"
        some_code()
        with atomic("other_db"):
            row = OtherModel.objects.using("other_db").create(**kwargs)
        raise DatabaseError
The data in "other_db" stays committed.
With some database backends it is probably possible in Django to trick the framework with two connections to the same database, as if they were two databases, but I'm sure it is untested and would be prone to mistakes: problems with migrations, a bigger load on the database backend, which must create real parallel transactions on every request, and no way to optimize it. It is better to use two real databases or to reorganize the code.
The setting DATABASE_ROUTERS is very useful, but I'm not sure yet if you are interested in multiple connections.
Even though this exact behaviour is not possible, since Django 3.2 there is a durable=True option (@transaction.atomic(durable=True)) to make sure that such a block of code isn't nested, so that if such code is accidentally run nested it raises a RuntimeError.
https://docs.djangoproject.com/en/dev/topics/db/transactions/#django.db.transaction.atomic
An article on this issue https://seddonym.me/2020/11/19/trouble-atomic/
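A minimal sketch of what that looks like (Django 3.2+); the function names are arbitrary:
from django.db import transaction

@transaction.atomic(durable=True)
def charge():
    ...  # guaranteed to run in its own outermost transaction

def caller():
    with transaction.atomic():
        charge()  # raises RuntimeError: a durable atomic block cannot be nested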
I am using psycopg2 in python, but my question is DBMS agnostic (as long as the DBMS supports transactions):
I am writing a python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. Hence, I altered my code to run a commit every 5000 records and the difference in speed was tremendous.
My problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception to happen, and continue with the rest of my insert statements.
I haven't found a way to achieve this; the only thing I've managed is to catch the exception, roll back my transaction and carry on from that point, where I lose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object, without any luck either.
Is there a way to achieve this functionality, either directly or indirectly, without having to rollback and recreate/re-run my statements?
Thank you all in advance,
George.
I doubt you'll find a fast cross-database way to do this. You just have to optimize the balance between the speed gains from batch size and the speed costs of repeating work when an entry causes a batch to fail.
Some DBs can continue with a transaction after an error, but PostgreSQL can't. However, it does allow you to create subtransactions with the SAVEPOINT command. These are far from free, but they're lower cost than a full transaction. So what you can do is every (say) 100 rows, issue a SAVEPOINT and then release the prior savepoint. If you hit an error, ROLLBACK TO SAVEPOINT, commit, then pick up where you left off.
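A rough psycopg2 sketch of that savepoint pattern (the table, columns, and rows variable are assumed for the example); here a savepoint is taken per row, but the same idea works per batch:
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # hypothetical connection string
cur = conn.cursor()

failed = []
for i, row in enumerate(rows):                         # `rows` prepared elsewhere
    cur.execute("SAVEPOINT sp")
    try:
        cur.execute("INSERT INTO items (id, payload) VALUES (%s, %s)", row)
    except psycopg2.IntegrityError:
        cur.execute("ROLLBACK TO SAVEPOINT sp")        # undo only the failed insert
        failed.append(row)
    cur.execute("RELEASE SAVEPOINT sp")
    if i % 5000 == 4999:
        conn.commit()                                  # keep committing in batches for speed

conn.commit()
conn.close()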
If you are committing your transactions after every 5,000-record interval, it seems like you could do a little bit of preprocessing of your input data and actually break it out into a list of 5,000-record chunks, i.e. [[[row1_data], [row2_data], ..., [row5000_data]], [[row5001_data], [row5002_data], ...], ..., [..., [row1000000_data]]]
Then run your inserts, and keep track of which chunk you are processing as well as which record you are currently inserting. When you get the error, you rerun the chunk, but skip the offending record.
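A sketch of that chunk-and-skip approach, with invented table and column names; on an integrity error the whole chunk is rolled back and replayed without the offending record:
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")   # hypothetical connection string
cur = conn.cursor()

CHUNK = 5000
skip = set()                       # indexes of records known to violate a constraint
i = 0
while i < len(rows):               # `rows` is the preprocessed input, prepared elsewhere
    try:
        for j, row in enumerate(rows[i:i + CHUNK]):
            if (i + j) not in skip:
                cur.execute("INSERT INTO items (id, payload) VALUES (%s, %s)", row)
        conn.commit()
        i += CHUNK                 # chunk committed, move on to the next one
    except psycopg2.IntegrityError:
        conn.rollback()
        skip.add(i + j)            # remember the offending record and rerun this chunk

conn.close()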
What is the difference between flush() and commit() in SQLAlchemy?
I've read the docs, but am none the wiser - they seem to assume a pre-understanding that I don't have.
I'm particularly interested in their impact on memory usage. I'm loading some data into a database from a series of files (around 5 million rows in total) and my session is occasionally falling over - it's a large database and a machine with not much memory.
I'm wondering if I'm using too many commit() and not enough flush() calls - but without really understanding what the difference is, it's hard to tell!
A Session object is basically an ongoing transaction of changes to a database (update, insert, delete). These operations aren't persisted to the database until they are committed (if your program aborts mid-transaction for some reason, any uncommitted changes are lost).
The session object registers transaction operations with session.add(), but doesn't yet communicate them to the database until session.flush() is called.
session.flush() communicates a series of operations to the database (insert, update, delete). The database maintains them as pending operations in a transaction. The changes aren't persisted permanently to disk, or visible to other transactions until the database receives a COMMIT for the current transaction (which is what session.commit() does).
session.commit() commits (persists) those changes to the database.
flush() is always called as part of a call to commit() (1).
When you use a Session object to query the database, the query will return results both from the database and from the flushed parts of the uncommitted transaction it holds. By default, Session objects autoflush their operations, but this can be disabled.
Hopefully this example will make this clearer:
#---
s = Session()

s.add(Foo('A'))    # The Foo('A') object has been added to the session.
                   # It has not been committed to the database yet,
                   # but is returned as part of a query.
print(1, s.query(Foo).all())

s.commit()

#---
s2 = Session()
s2.autoflush = False

s2.add(Foo('B'))
print(2, s2.query(Foo).all())  # The Foo('B') object is *not* returned
                               # as part of this query because it hasn't
                               # been flushed yet.

s2.flush()                     # Now, Foo('B') is in the same state as
                               # Foo('A') was above.
print(3, s2.query(Foo).all())

s2.rollback()                  # Foo('B') has not been committed, and rolling
                               # back the session's transaction removes it
                               # from the session.
print(4, s2.query(Foo).all())
#---
Output:
1 [<Foo('A')>]
2 [<Foo('A')>]
3 [<Foo('A')>, <Foo('B')>]
4 [<Foo('A')>]
This does not strictly answer the original question but some people have mentioned that with session.autoflush = True you don't have to use session.flush()... And this is not always true.
If you want to use the id of a newly created object in the middle of a transaction, you must call session.flush().
# Given a model with at least this id (Base and session are assumed to be set up elsewhere)
from sqlalchemy import Column, Integer

class AModel(Base):
    __tablename__ = 'a_model'                 # required by the declarative mapping
    id = Column(Integer, primary_key=True)    # autoincrement by default on an integer primary key

session.autoflush = True

a = AModel()
session.add(a)
a.id             # None
session.flush()
a.id             # autoincremented integer
This is because autoflush does NOT fill in the id by itself (although querying for the object will trigger a flush and fill it in, which can sometimes cause confusion, as in "why does this work here but not there?"; but snapshoe already covered this part).
One related aspect that seems pretty important to me and wasn't really mentioned:
Why would you not commit all the time? - The answer is atomicity.
A fancy word meaning: a set of operations must all execute successfully, or none of them takes effect.
For example, say you want to create/update/delete some object (A) and then create/update/delete another (B), but if (B) fails you want to revert (A). That means those two operations have to be atomic.
Therefore, if (B) needs a result of (A), you want to call flush after (A) and commit after (B).
Also, if session.autoflush is True, you will not need to call flush manually, except for the case I mentioned above or the others in Jimbo's answer.
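To make that concrete, a minimal sketch assuming Parent and Child are mapped classes (invented here) where Child needs Parent's autogenerated primary key, but the two inserts must stand or fall together:
# (A) create the parent and flush so its primary key is assigned
parent = Parent(name="A")
session.add(parent)
session.flush()              # INSERT sent, parent.id populated, nothing permanent yet

# (B) create the child using that id, then commit both together
session.add(Child(parent_id=parent.id, name="B"))
session.commit()             # both rows become permanent at once
# If (B) had failed, session.rollback() would have undone (A) as well.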
Use flush when you need to simulate a write, for example to get a primary key ID from an autoincrementing counter.
john = Person(name='John Smith', parent=None)
session.add(john)
session.flush()
son = Person(name='Bill Smith', parent=john.id)
Without the flush, john.id would be None.
Like others have said, without commit() none of this will be permanently persisted to DB.
Why flush if you can commit?
As someone new to working with databases and SQLAlchemy, I didn't find the previous answers - that flush() sends SQL statements to the DB and commit() persists them - entirely clear. The definitions make sense, but it isn't immediately obvious from them why you would use a flush instead of just committing.
Since a commit always flushes (https://docs.sqlalchemy.org/en/13/orm/session_basics.html#committing), these sound really similar. I think the big thing to highlight is that a flush is not permanent and can be undone, whereas a commit is permanent, in the sense that you can't ask the database to undo the last commit (I think).
@snapshoe highlights that if you want to query the database and get results that include newly added objects, you need to have flushed first (or committed, which will flush for you). Perhaps this is useful for some people, although I'm not sure why you would want to flush rather than commit (other than the trivial answer that it can be undone).
In another example, I was syncing documents between a local DB and a remote server, and if the user decided to cancel, all adds/updates/deletes should be undone (i.e. no partial sync, only a full sync). When updating a single document, I decided to simply delete the old row and add the updated version from the remote server. It turns out that, due to the way SQLAlchemy is written, the order of operations when committing is not guaranteed. This resulted in the duplicate version being added before the old one was deleted, which made the DB fail a unique constraint. To get around this I used flush() so that the order was maintained, but I could still undo everything if the sync process failed later.
See my post on this at: Is there any order for add versus delete when committing in sqlalchemy
Similarly, someone wanted to know whether add order is maintained when committing, i.e. if I add object1 then add object2, does object1 get added to the database before object2
Does SQLAlchemy save order when adding objects to session?
Again, here presumably the use of a flush() would ensure the desired behavior. So in summary, one use for flush is to provide order guarantees (I think), again while still allowing yourself an "undo" option that commit does not provide.
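For example, the delete-then-reinsert case above can be handled with an intermediate flush. A sketch, where Document is a made-up model with a unique constraint on doc_id:
# Replace a document while keeping everything undoable until the sync finishes.
old = session.query(Document).filter_by(doc_id=42).one()
session.delete(old)
session.flush()                 # the DELETE is issued now, freeing the unique constraint

session.add(Document(doc_id=42, body=new_body))
# ... continue syncing; session.commit() if everything succeeded,
# or session.rollback() to undo both the delete and the insert if the user cancels.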
Autoflush and Autocommit
Note that autoflush can be used to ensure queries act on up-to-date state, as SQLAlchemy will flush pending changes before executing the query. https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session.params.autoflush
Autocommit is something else that I don't completely understand but it sounds like its use is discouraged:
https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session.params.autocommit
Memory Usage
Now, the original question actually wanted to know about the impact of flush vs. commit on memory. As holding the pending changes is something the database takes over once they are flushed (I think), simply flushing should be sufficient to offload memory to the database - although committing shouldn't hurt (it actually probably helps - see below) if you don't care about undoing.
sqlalchemy uses weak referencing for objects that have been flushed: https://docs.sqlalchemy.org/en/13/orm/session_state_management.html#session-referencing-behavior
This means if you don't have an object explicitly held onto somewhere, like in a list or dict, sqlalchemy won't keep it in memory.
However, then you have the database side of things to worry about. Presumably flushing without committing comes with some memory penalty to maintain the transaction. Again, I'm new to this but here's a link that seems to suggest exactly this: https://stackoverflow.com/a/15305650/764365
In other words, commits should reduce memory usage, although presumably there is a trade-off between memory and performance here. In other words, you probably don't want to commit every single database change, one at a time (for performance reasons), but waiting too long will increase memory usage.
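A sketch of that trade-off when bulk-loading (the model, the record source, and the batch size are all arbitrary): committing every N rows keeps neither the Session nor the open transaction holding millions of pending rows at once.
BATCH = 10_000                                          # arbitrary batch size

for n, record in enumerate(read_records(), start=1):    # read_records() is hypothetical
    session.add(MyModel(**record))                      # MyModel is a made-up mapped class
    if n % BATCH == 0:
        session.commit()    # ends the transaction; flushed objects can now be garbage collected

session.commit()            # commit the final partial batch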
The existing answers don't make a lot of sense unless you understand what a database transaction is. (That was the case for me until recently.)
Sometimes you might want to run multiple SQL statements and have them succeed or fail as a whole. For example, if you want to execute a bank transfer from account A to account B, you'll need to do two queries like
UPDATE accounts SET value = value - 100 WHERE acct = 'A'
UPDATE accounts SET value = value + 100 WHERE acct = 'B'
If the first query succeeds but the second fails, this is bad (for obvious reasons). So, we need a way to treat these two queries "as a whole". The solution is to start with a BEGIN statement and end with either a COMMIT statement or a ROLLBACK statement, like
BEGIN
UPDATE accounts SET value = value - 100 WHERE acct = 'A'
UPDATE accounts SET value = value + 100 WHERE acct = 'B'
COMMIT
This is a single transaction.
In SQLAlchemy's ORM, this might look like
# BEGIN issued here
acctA = session.query(Account).get(1) # SELECT issued here
acctB = session.query(Account).get(2) # SELECT issued here
acctA.value -= 100
acctB.value += 100
session.commit() # UPDATEs and COMMIT issued here
If you monitor when the various queries get executed, you'll see the UPDATEs don't hit the database until you call session.commit().
In some situations you might want to execute the UPDATE statements before issuing a COMMIT. (Perhaps the database issues an auto-incrementing id to the object and you want to fetch it before COMMITing). In these cases, you can explicitly flush() the session.
# BEGIN issued here
acctA = session.query(Account).get(1) # SELECT issued here
acctB = session.query(Account).get(2) # SELECT issued here
acctA.value -= 100
acctB.value += 100
session.flush() # UPDATEs issued here
session.commit() # COMMIT issued here
commit() records these changes in the database. flush() is always called as part of a call to commit() (1). When you use a Session object to query the database, the query returns results from both the database and the flushed parts of the uncommitted transaction it holds.
For simple orientation:
commit makes real changes (they become visible in the database)
flush makes provisional changes (they are visible only to you)
Imagine that databases work like git-branching.
First you have to understand that during a transaction you are not manipulating the real database data.
Instead, you get something like a new branch, and there you play around.
If at some point you write the command commit, that means: "merge my data-changes into main DB data".
But if you need some data that you can normally only get after a commit (e.g. you insert into a table and need the inserted PKID), then you use the flush command, meaning: "calculate the future PKID and reserve it for me".
Then you can use that PKID value further on in your code and be sure that the real data will match it.
Commit must always come at the end, to merge into main DB data.