Control Atomic Transactions in Django - Python

I have a simple library application. In order to force 3 actions to commit as one, and roll back if any of them fails, I made the following code changes:
In settings.py:
AUTOCOMMIT=False
In forms.py:
from django import forms
from django.db import IntegrityError, transaction

class CreateLoan(forms.Form):
    # Fields...

    def save(self):
        id_book = self.cleaned_data.get('id_book', None)
        id_customer = self.cleaned_data.get('id_customer', None)
        start_date = self.cleaned_data.get('start_date', None)
        book = Book.objects.get(id=id_book)
        customer = Customer.objects.get(id=id_customer)
        new_return = Return(
            book=book,
            start_date=start_date)
        txn = Loan_Txn(
            customer=customer,
            book=book,
            start_date=start_date
        )
        try:
            with transaction.atomic():
                book.status = "ON_LOAN"  # model instances have no .update(); set the field and save
                book.save()
                new_return.save(force_insert=True)
                txn.save(force_insert=True)
        except IntegrityError:
            raise forms.ValidationError("Something occurred. Please try again.")
Am I still missing anything with regard to this? I'm using Django 1.9 with Python 3.4.3, and the database is MySQL.

You're using transaction.atomic() correctly (including putting the try ... except outside the transaction) but you should definitely not be setting AUTOCOMMIT = False.
As the documentation states, you set that system-wide setting to False when you want to "disable Django's transaction management", but that's clearly not what you want to do, since you're using transaction.atomic()! More from the documentation:
If you do this, Django won’t enable autocommit, and won’t perform any commits. You’ll get the regular behavior of the underlying database library.
This requires you to commit explicitly every transaction, even those started by Django or by third-party libraries. Thus, this is best used in situations where you want to run your own transaction-controlling middleware or do something really strange.
So just don't do that. Django will of course disable autocommit for that atomic block and re-enable it when the block finishes.
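As a quick illustration (a sketch, not from the original answer), transaction.get_autocommit() lets you observe Django toggling autocommit around an atomic block:

from django.db import transaction

print(transaction.get_autocommit())      # True: autocommit is on by default
with transaction.atomic():
    print(transaction.get_autocommit())  # False: disabled inside the atomic block
print(transaction.get_autocommit())      # True again once the block finishes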

Related

How we patched Django to keep users logged in between sessions

We had a problem with a website which uses Django. Each time we upgrade Django, if a user is logged in with two or more different browsers, and then they log in again from one browser, they are automatically logged out from all other sessions (browsers). Since we upgraded Django to new major versions about 5 times in the last year, this caused us a headache. We don't want to force users to log in again and again between sessions. How can we solve this problem?
We checked and found out that this problem is caused by a change in PBKDF2PasswordHasher.iterations between versions of Django. Each time we upgrade Django to a new major version (such as from 3.0 to 3.1), PBKDF2PasswordHasher.iterations changes. This causes the user's hashed password to be calculated again the next time the user logs in, which forces the user to be logged out of all other sessions. I even created a ticket in Django's tracking system.
There are two options to fix this issue. First, we can patch the PBKDF2PasswordHasher class to keep the number of iterations constant, and also update its must_update() method:
from django.contrib.auth.hashers import PBKDF2PasswordHasher

def patch():
    def must_update(self, encoded):
        # Update the stored password only if the iterations diff is at least 250,000.
        algorithm, iterations, salt, hash = encoded.split('$', 3)
        iterations_diff = abs(self.iterations - int(iterations))
        return ((int(iterations) != self.iterations) and (iterations_diff >= 250000))

    PBKDF2PasswordHasher.iterations = 180000  # Django 3.0.x
    PBKDF2PasswordHasher.must_update = must_update
And then in our base AppConfig class:
from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _

class SpeedyCoreBaseConfig(AppConfig):
    name = 'speedy.core.base'
    verbose_name = _("Speedy Core Base App")
    label = 'base'

    def ready(self):
        locale_patches.patch()   # Another patch
        session_patches.patch()  # This patch
Or, you can inherit a new class from PBKDF2PasswordHasher, change iterations and def must_update, and use your new class in the settings (PASSWORD_HASHERS). We used the first option, although it might be better to use the second option (inherit a new class) from a software engineering perspective. They both work.
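For reference, a minimal sketch of the second option (the class name and module path are hypothetical):

from django.contrib.auth.hashers import PBKDF2PasswordHasher

class CustomPBKDF2PasswordHasher(PBKDF2PasswordHasher):
    iterations = 180000  # pinned to the Django 3.0.x value

    def must_update(self, encoded):
        # Same rule as the patch: re-hash only on a large iterations diff.
        algorithm, iterations, salt, hash = encoded.split('$', 3)
        return abs(self.iterations - int(iterations)) >= 250000

And then in settings.py:

PASSWORD_HASHERS = [
    'myproject.hashers.CustomPBKDF2PasswordHasher',  # hypothetical path
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
]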

Django reset_sequences doesn't work in LiveServerTestCase

I've updated Django from 1.6 to 1.8.3. I create test models in the setUp method of my unit tests, something like this:
class MyTestCase(LiveServerTestCase):
    reset_sequences = True

    def setUp(self):
        self.my_model = models.MyModel.objects.create(name='test')
And I have code in the application which relies on the primary key == 1. I've noticed that the sequences aren't actually reset: in each subsequent test the pk is greater than in the previous one.
This worked fine in Django 1.6, but after migrating to 1.8 the problem appeared.
Should I reset the sequences manually?
P.S. I know about fixtures, but my models are more complicated and for me it's easier to create them in code.
The problem was sqlite3. The tests were being run with a different settings file, where sqlite3 is configured as the database.
Django checks whether the database supports sequences:
# django/test/testcases.py:809
def _reset_sequences(self, db_name):
    conn = connections[db_name]
    if conn.features.supports_sequence_reset:
        sql_list = conn.ops.sequence_reset_by_name_sql(
            no_style(), conn.introspection.sequence_list())
        # ....
So I switched the test settings to PostgreSQL and now it works normally.
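A minimal sketch of such a test settings override (database name and credentials are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # supports sequence reset
        'NAME': 'test_db',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}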

Delete Query in web2py not working

This is the function for deleting a record from the database.
def pro_del():
    d = request.get_vars.d
    db(db.products.product_id == d).delete()
    session.flash = "Product Deleted"
    redirect(URL('default', 'index'))
    #return locals()
The id is successfully getting passed to the function by get_vars (meaning d is getting its value). I checked this by returning locals.
The redirection is also working fine. It's also flashing the message.
Just the query is not working: the record is not getting deleted from the database.
Note: d is alphanumeric here.
From web2py's DAL documentation:
No create, drop, insert, truncate, delete, or update operation is actually committed until web2py issues the commit command. In models, views and controllers, web2py does this for you, but in modules you are required to do the commit.
Have you tried db.commit() after your .delete()?
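For example, a sketch of the same function with an explicit commit added after the delete:

def pro_del():
    d = request.get_vars.d
    db(db.products.product_id == d).delete()
    db.commit()  # explicit commit, per the DAL documentation quoted above
    session.flash = "Product Deleted"
    redirect(URL('default', 'index'))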

django: Proper way to recover from IntegrityError

What's the proper way to recover from an IntegrityError, or any other errors that could leave my transactions screwed up without using manual transaction control?
In my application, I'm running into problems with IntegrityErrors that I want to recover from, that screw up later database activity, leaving me with:
DatabaseError: current transaction is aborted, commands ignored until end of transaction block
for all database activity after ignoring IntegrityErrors.
This block of code should reproduce the error I'm seeing:
from django.db import IntegrityError, transaction

try:
    MyModel.save()  # Do a bad save that will raise IntegrityError
except IntegrityError:
    pass
MyModel.objects.all()  # raises DatabaseError: current transaction is aborted, commands ignored until end of transaction block
According to the docs, the solution to recover from an IntegrityError is by rolling back the transaction. But the following code results in a TransactionManagementError.
from django.db import IntegrityError, transaction

try:
    MyModel.save()
except IntegrityError:
    transaction.rollback()  # raises TransactionManagementError: This code isn't under transaction management
MyModel.objects.all()  # Should work
EDIT: I'm confused by the message from the TransactionManagementError, because if in my except block I do:
connection._cursor().connection.rollback()
instead of Django's transaction.rollback(), the MyModel.objects.all() succeeds, which doesn't make sense if my code "isn't under transaction management". It also doesn't make sense that code that isn't under transaction management (which I assume means it's using autocommit) can have transactions that span multiple queries.
EDIT #2: I'm aware that I can use manual transaction control to recover from these errors, but shouldn't I be able to recover without it? My understanding is that if I'm using autocommit, there should only be one write per transaction, so it should not affect later database activity.
EDIT #3: This is a couple of years later, but in Django 1.4 (not sure about later versions), another issue here was that Model.objects.bulk_create() doesn't honor autocommit behavior.
Versions:
Django: 1.4 (TransactionMiddleware is not enabled)
Python: 2.7
Postgres: 9.1
Django's default commit mode is autocommit. In order to roll back, you need to wrap the code doing the work in a transaction. [docs]
with transaction.commit_on_success():
    # Your code here. Errors will auto-rollback.
To get database-level autocommit, you will need the following option in your DATABASES settings dictionary:
'OPTIONS': {'autocommit': True,}
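In context, that looks something like this (a sketch for Django 1.4; the other connection keys are elided):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        # NAME, USER, PASSWORD, HOST, PORT as usual...
        'OPTIONS': {'autocommit': True},
    }
}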
Alternatively, you can use explicit savepoints to roll back to. [docs]
@transaction.commit_manually
def viewfunc(request):
    a.save()
    # open transaction now contains a.save()
    sid = transaction.savepoint()
    b.save()
    # open transaction now contains a.save() and b.save()
    if want_to_keep_b:
        transaction.savepoint_commit(sid)
        # open transaction still contains a.save() and b.save()
    else:
        transaction.savepoint_rollback(sid)
        # open transaction now contains only a.save()
    transaction.commit()

Connecting SQLAlchemy to MS Access

How can I connect to MS Access with SQLAlchemy? On their website, it says the connection string is access+pyodbc. Does that mean that I need pyodbc for the connection? Since I am a newbie, please be gentle.
In theory this would be via create_engine("access:///some_odbc_dsn"), but the Access backend hasn't been in service at all since SQLAlchemy 0.5, and it's not clear how well it was working back then either (this is why it's noted as "development" at http://docs.sqlalchemy.org/en/latest/core/engines.html#supported-databases - "development" means "a development version of the dialect exists, but is not yet usable"). There's just not enough interest/volunteers to keep this dialect running right now. (When/if there is, you'll see it at http://docs.sqlalchemy.org/en/latest/dialects/access.html.)
Your best bet for Access right now would be to export the data into a SQLite database file (or of course some other database, though SQLite is file-based in a similar way at least), then use that.
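For example, once the data is exported (the file name here is just a placeholder):

from sqlalchemy import create_engine

engine = create_engine("sqlite:///access_export.db")  # hypothetical exported file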
Update, September 2019:
The sqlalchemy-access dialect has been resurrected. Details here.
Usage example:
engine = create_engine("access+pyodbc://@some_odbc_dsn")
I primarily needed read access and some simple queries. The latest version of SQLAlchemy has the (broken) Access back-end modules, but the dialect isn't registered as an entry point.
It needed a few fixups, but this worked for me:
import sqlalchemy

def fixup_access():
    import sqlalchemy.dialects.access.base

    class FixedAccessDialect(sqlalchemy.dialects.access.base.AccessDialect):
        def _check_unicode_returns(self, connection):
            return True

        def do_execute(self, cursor, statement, params, context=None, **kwargs):
            # The Access ODBC driver chokes on an empty params dict.
            if params == {}:
                params = ()
            super(sqlalchemy.dialects.access.base.AccessDialect, self).do_execute(cursor, statement, params, **kwargs)

    # Register the fixed dialect under the scheme suffix 'fix', so that
    # 'access+fix://' URLs below resolve to FixedAccessDialect.
    class SomeObject(object):
        pass
    fixed_dialect_mod = SomeObject
    fixed_dialect_mod.dialect = FixedAccessDialect
    sqlalchemy.dialects.access.fix = fixed_dialect_mod

fixup_access()
ENGINE = sqlalchemy.create_engine('access+fix://admin@/%s' % (db_location,))
