We had a problem with a website which uses Django. Each time we upgrade Django, if a user is logged in with two or more different browsers, and they then log in again from one browser, they are automatically logged out from all other sessions (browsers). Since we upgraded Django to a new major version about 5 times in the last year, this caused us a headache. We don't want to force users to log in again and again between sessions. How can we solve this problem?
We checked and found out that this problem is caused by a change in PBKDF2PasswordHasher.iterations between versions of Django. Each time we upgrade Django to a new major version (such as from 3.0 to 3.1), PBKDF2PasswordHasher.iterations changes. This causes the user's password hash to be recalculated the next time the user logs in, which logs the user out of all other sessions. I even created a ticket in Django's issue tracker.
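To see why a changed iteration count invalidates sessions: Django stores the iteration count inside the encoded password string, and the session auth hash is derived from that stored password. A minimal stand-alone sketch (the encoded string below is fabricated for illustration, in Django's format, not a real hash):

```python
# Django stores PBKDF2 passwords as: <algorithm>$<iterations>$<salt>$<hash>.
# The example string below is made up for illustration.
encoded = "pbkdf2_sha256$180000$abcdefgh$0123456789abcdef"

algorithm, iterations, salt, hash_ = encoded.split('$', 3)
print(algorithm)        # pbkdf2_sha256
print(int(iterations))  # 180000

# When a new Django release raises PBKDF2PasswordHasher.iterations,
# int(iterations) != hasher.iterations, so must_update() returns True,
# the password is re-hashed on the next login, and the session auth
# hash (derived from the password field) changes -- which logs the
# user out of their other sessions.
```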
There are two options to fix this issue. First, we can patch the PBKDF2PasswordHasher class to keep the number of iterations constant, and also override its must_update method:
from django.contrib.auth.hashers import PBKDF2PasswordHasher


def patch():
    def must_update(self, encoded):
        # Update the stored password only if the iterations diff is at least 250,000.
        algorithm, iterations, salt, hash = encoded.split('$', 3)
        iterations_diff = abs(self.iterations - int(iterations))
        return ((int(iterations) != self.iterations) and (iterations_diff >= 250000))

    PBKDF2PasswordHasher.iterations = 180000  # Django 3.0.x
    PBKDF2PasswordHasher.must_update = must_update
And then in our base AppConfig class:
class SpeedyCoreBaseConfig(AppConfig):
    name = 'speedy.core.base'
    verbose_name = _("Speedy Core Base App")
    label = 'base'

    def ready(self):
        locale_patches.patch()  # Another patch
        session_patches.patch()  # This patch
Or, you can inherit a new class from PBKDF2PasswordHasher, override iterations and must_update, and list your new class in the PASSWORD_HASHERS setting. We used the first option, although the second option (inheriting a new class) might be better from a software engineering perspective. They both work.
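A sketch of that second option might look like this. The class and module names here are placeholders, and it assumes the hasher API of Django 3.0.x (where the default iteration count is 180,000); treat it as a configuration sketch, not a drop-in:

```python
# myproject/hashers.py -- "myproject" and the class name are hypothetical.
from django.contrib.auth.hashers import PBKDF2PasswordHasher


class MyPBKDF2PasswordHasher(PBKDF2PasswordHasher):
    iterations = 180000  # Pin the count so Django upgrades don't change it.

    def must_update(self, encoded):
        # Re-hash the stored password only when the stored iteration
        # count differs from ours by at least 250,000.
        algorithm, iterations, salt, hash = encoded.split('$', 3)
        return abs(self.iterations - int(iterations)) >= 250000


# settings.py -- the first hasher listed is used for new passwords.
PASSWORD_HASHERS = [
    'myproject.hashers.MyPBKDF2PasswordHasher',
    # ... Django's default hashers as fallbacks ...
]
```

Because the subclass lives in your own module, the pinned iteration count survives Django upgrades until you change it deliberately.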
The project is in Python.
I want to log all users affected by a given error (an "IndexError", for instance).
Currently when I try to do this:
from sentry_sdk import push_scope, capture_exception

with push_scope() as scope:
    scope.user = {"email": user.email}
    capture_exception(ex)
it sets the first user's email, and that doesn't change when other users hit the same error.
I want to see all users who were affected by that error.
I have a simple library application. In order to make 3 actions commit as one atomic action, and roll back if any of them fails, I made the following code changes:
In settings.py:
AUTOCOMMIT = False
In forms.py:
from django.db import IntegrityError, transaction


class CreateLoan(forms.Form):
    # Fields...

    def save(self):
        id_book = self.cleaned_data.get('id_book', None)
        id_customer = self.cleaned_data.get('id_customer', None)
        start_date = self.cleaned_data.get('start_date', None)
        book = Book.objects.get(id=id_book)
        customer = Customer.objects.get(id=id_customer)
        new_return = Return(
            book=book,
            start_date=start_date,
        )
        txn = Loan_Txn(
            customer=customer,
            book=book,
            start_date=start_date,
        )
        try:
            with transaction.atomic():
                book.status = "ON_LOAN"
                book.save()
                new_return.save(force_insert=True)
                txn.save(force_insert=True)
        except IntegrityError:
            raise forms.ValidationError("Something occurred. Please try again.")
Am I still missing anything with regard to this? I'm using Django 1.9 with Python 3.4.3, and the database is MySQL.
You're using transaction.atomic() correctly (including putting the try ... except outside the transaction) but you should definitely not be setting AUTOCOMMIT = False.
As the documentation states, you set that system-wide setting to False when you want to "disable Django’s transaction management"—but that's clearly not what you want to do, since you're using transaction.atomic()! More from the documentation:
If you do this, Django won’t enable autocommit, and won’t perform any commits. You’ll get the regular behavior of the underlying database library.
This requires you to commit explicitly every transaction, even those started by Django or by third-party libraries. Thus, this is best used in situations where you want to run your own transaction-controlling middleware or do something really strange.
So just don't do that. Django will of course disable autocommit for that atomic block and re-enable it when the block finishes.
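As an aside, the all-or-nothing behavior that transaction.atomic() gives you can be illustrated without Django, using the standard library's sqlite3 module: a connection used as a context manager commits on success and rolls back on an exception. This is a stand-alone sketch of the transaction semantics, not Django code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loan (book TEXT NOT NULL UNIQUE)")

# Success: both inserts commit together when the block exits cleanly.
with conn:
    conn.execute("INSERT INTO loan VALUES ('book-1')")
    conn.execute("INSERT INTO loan VALUES ('book-2')")

# Failure: the duplicate insert violates UNIQUE, so the whole block
# is rolled back -- 'book-3' is not saved either.
try:
    with conn:
        conn.execute("INSERT INTO loan VALUES ('book-3')")
        conn.execute("INSERT INTO loan VALUES ('book-1')")  # IntegrityError
except sqlite3.IntegrityError:
    pass

rows = [row[0] for row in conn.execute("SELECT book FROM loan ORDER BY book")]
print(rows)  # ['book-1', 'book-2']
```

Django's atomic() does the same job at the ORM level, which is why disabling its transaction management with AUTOCOMMIT = False works against you.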
I've updated Django from 1.6 to 1.8.3. I create test models in the setUp method in unit tests, something like this:
class MyTestCase(LiveServerTestCase):
    reset_sequences = True

    def setUp(self):
        self.my_model = models.MyModel.objects.create(name='test')
And I have code in the application which relies on the primary key being 1. I've noticed that the sequences aren't actually reset; in each subsequent test the pk is greater than in the previous one.
This worked OK in Django 1.6, but after migrating to 1.8 the problem appeared.
Should I reset the sequences manually?
P.S. I know about fixtures, but my models are more complicated and for me it's easier to create them in code.
The problem was sqlite3. The tests were being run with a different settings file, where sqlite3 is configured as the database.
Django checks whether the database supports sequence reset:
# django/test/testcases.py:809
def _reset_sequences(self, db_name):
    conn = connections[db_name]
    if conn.features.supports_sequence_reset:
        sql_list = conn.ops.sequence_reset_by_name_sql(
            no_style(), conn.introspection.sequence_list())
        # ....
So I've switched the test settings to PostgreSQL, and now it works normally.
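For reference, a minimal test-settings override might look like this. The project name, database names, and credentials below are placeholders; the point is that sqlite3 does not support sequence reset, while PostgreSQL does (the backend name shown is the Django 1.8-era one):

```python
# test_settings.py -- placeholder values; adjust to your environment.
from myproject.settings import *  # hypothetical base settings module

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject_test',
        'USER': 'myproject',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```

Run the tests with this file (e.g. `python manage.py test --settings=test_settings`) so that `reset_sequences = True` actually takes effect.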
I want to generate a password reset token for a User model that I have with Google App Engine. Apparently we're not allowed to use Django that easily with GAE, so the raw code for the Django method for generating tokens is:
def _make_token_with_timestamp(self, user, timestamp):
    # timestamp is number of days since 2001-1-1. Converted to
    # base 36, this gives us a 3 digit string until about 2121.
    ts_b36 = int_to_base36(timestamp)

    # By hashing on the internal state of the user and using state
    # that is sure to change (the password salt will change as soon as
    # the password is set, at least for current Django auth, and
    # last_login will also change), we produce a hash that will be
    # invalid as soon as it is used.
    # We limit the hash to 20 chars to keep URL short.
    key_salt = "django.contrib.auth.tokens.PasswordResetTokenGenerator"

    # Ensure results are consistent across DB backends.
    login_timestamp = user.last_login.replace(microsecond=0, tzinfo=None)
    value = (unicode(user.id) + user.password +
             unicode(login_timestamp) + unicode(timestamp))
    hash = salted_hmac(key_salt, value).hexdigest()[::2]
    return "%s-%s" % (ts_b36, hash)
Python is not my language of expertise, so I'll need some help writing a custom method similar to the one above. I just have a couple of questions. First, what is the purpose of the timestamp? Also, Django has its own User system, while I'm using a simple custom User model of my own. Which aspects of the above code will I need to retain, and which ones can I do away with?
Well, the check_token method looks like this:
def check_token(self, user, token):
    """
    Check that a password reset token is correct for a given user.
    """
    # Parse the token.
    try:
        ts_b36, hash = token.split("-")
    except ValueError:
        return False

    try:
        ts = base36_to_int(ts_b36)
    except ValueError:
        return False

    # Check that the timestamp/uid has not been tampered with.
    if not constant_time_compare(self._make_token_with_timestamp(user, ts), token):
        return False

    # Check the timestamp is within limit.
    if (self._num_days(self._today()) - ts) > settings.PASSWORD_RESET_TIMEOUT_DAYS:
        return False

    return True
First, the timestamp part of the token is converted back to an integer.
Then a new token is generated using that timestamp and compared to the old token. Note that when generating a token, the timestamp of the last login is one of the parameters used to calculate the hash. That means that after a user logs in, the old token becomes invalid, which makes sense for a password reset token.
Lastly, a check is performed to see whether the token hasn't already timed out.
It's a fairly simple process, and also fairly secure. If you wanted to use the reset system to break into an account, you'd have to know the user's (hashed) password and last login timestamp to calculate the hash. And if you knew those, you wouldn't need to break into the account...
So if you want to make a system like that, it's important when generating the hash to use parameters that are not easy to guess, and of course to use a good, salted hash function. Django uses SHA-1; using other hashlib digests would of course be easily possible.
Another way would be to generate a random password reset token and store it in the database, but this potentially wastes a lot of space, as the token column would probably be empty for most users.
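A stripped-down, self-contained version of the same scheme for a custom User model might look like this. Everything here is an illustration under assumptions: SECRET_KEY, the key_salt string, the field values passed in, and the helper names are all made up for the sketch, not GAE or Django APIs:

```python
import hashlib
import hmac
from datetime import date, datetime

SECRET_KEY = "replace-me-with-a-real-secret"  # placeholder secret


def int_to_base36(n):
    # Minimal base-36 encoder for non-negative integers.
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, rem = divmod(n, 36)
        out = digits[rem] + out
    return out


def days_since_2001(when):
    # Same idea as Django's _num_days(): days since 2001-01-01.
    return (when - date(2001, 1, 1)).days


def make_token(user_id, password_hash, last_login, timestamp_days):
    # Mirror the approach above: HMAC over user state that changes on
    # login or password change, keyed with a per-purpose salt.
    key_salt = "myapp.PasswordResetTokenGenerator"  # hypothetical name
    value = "%s%s%s%s" % (user_id, password_hash, last_login, timestamp_days)
    key = hashlib.sha1((key_salt + SECRET_KEY).encode()).digest()
    digest = hmac.new(key, value.encode(), hashlib.sha1).hexdigest()[::2]
    return "%s-%s" % (int_to_base36(timestamp_days), digest)


ts = days_since_2001(date(2012, 1, 2))
token = make_token(42, "hashed-pw", datetime(2012, 1, 1, 12, 0), ts)
# Changing any ingredient (here, last_login) invalidates the token:
token2 = make_token(42, "hashed-pw", datetime(2012, 1, 1, 13, 0), ts)
print(token != token2)  # True
```

Checking a token is then the reverse: split off the base-36 timestamp, recompute the token with the stored user state, compare in constant time, and reject if the timestamp is too old.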
I have written a few Python tools in the past to extract data from my Outlook contacts. Now, I am trying to modify my Outlook Contacts. I am finding that my changes are being noted by Outlook, but they aren't sticking. I seem to be updating some cache, but not the real record.
The code is straightforward.
import win32com.client
import pywintypes
o = win32com.client.Dispatch("Outlook.Application")
ns = o.GetNamespace("MAPI")
profile = ns.Folders.Item("My Profile Name")
contacts = profile.Folders.Item("Contacts")
contact = contacts.Items[43] # Grab a random contact, for this example.
print "About to overwrite ",contact.FirstName, contact.LastName
contact.categories = 'Supplier' # Override the categories
# Edit: I don't always do these last steps.
ns = None
o = None
At this point, I change over to Outlook, which is opened to the Detailed Address Cards view.
I look at the contact summary (without opening it) and the category is unchanged (not refreshed?).
I open the contact and its category HAS changed, sometimes. (Not sure of when, but it feels like it is cache related.) If it has changed, it prompts me to Save Changes when I close it which is odd, because I haven't changed anything in the Outlook UI.
If I quit and restart Outlook, the changes are gone.
I suspect I am failing to call SaveChanges, but I can't see which object supports it.
So my question is:
Should I be calling SaveChanges? If so, where is it?
Am I making some other silly mistake, which is causing my data to be discarded?
I believe there is a .Save() method on the contact item (Outlook's ContactItem object exposes Save), so after setting the category you need to add:
contact.Save()