Django: cannot migrate due to object reference in form - python

I am having trouble running Django migrations on a new database because of a get_or_create() call in a form.
form.py
class EmployeeForm(...):
    # Make sure that we have a directory before initializing the form
    default_upload_directory = Directory.objects.get_or_create(name='Employee', parent=None)[0]
This is the important part of a custom form I wrote. It links the employee's uploaded files to a default directory. Everything works fine except for the migration: when I run it against a new database, it fails with django.db.utils.OperationalError: no such table: filemanager_directory. The same thing happens if I use a plain objects.get(...) to fetch the Directory object.
I am not sure how to solve this. Obviously the table does not exist yet. On an existing database with the tables already in place, everything works fine.
EDIT: The migrations I am trying to run are not even related to this app. As a workaround, I can comment out the line, run the migrations, and then restore it, but that is not a clean solution.

This is, quite simply, not a thing you should do. You shouldn't run any kind of database access in any code that runs at import time, which this does.
There should never be a need to do this in any case. If your goal is to ensure an object exists when the form is instantiated, put it in the __init__ method.
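A minimal sketch of that approach; the form base class and the import path for Directory (the filemanager app from the error message) are assumptions:

    from django import forms
    from filemanager.models import Directory  # assumed import path

    class EmployeeForm(forms.Form):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # The query now runs when the form is instantiated, not at import
            # time, so migrations on a fresh database never hit it.
            self.default_upload_directory = Directory.objects.get_or_create(
                name='Employee', parent=None,
            )[0]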
But you should also remember that ensuring database objects exist is the job of migrations in the first place. Write a data migration with a RunPython operation to create it, as in the sketch below.
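A hedged sketch of such a data migration; the app label and the migration it depends on are assumptions:

    from django.db import migrations

    def create_default_directory(apps, schema_editor):
        # Use the historical model via apps.get_model() rather than a direct import.
        Directory = apps.get_model('filemanager', 'Directory')
        Directory.objects.get_or_create(name='Employee', parent=None)

    class Migration(migrations.Migration):

        dependencies = [
            ('filemanager', '0001_initial'),  # assumed: the migration that creates Directory
        ]

        operations = [
            migrations.RunPython(create_default_directory, migrations.RunPython.noop),
        ]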

Related

Django migrations not getting applied on Postgres after changing the datatype of an attribute in models.py

I had this model where price was an IntegerField. I ran migrations and all was fine.
    from django.db import models

    class Bill(models.Model):
        price = models.IntegerField()
Then, due to requirement changes, I had to turn the price field into a JSONField that stores the price value under certain keys, similar to this:
    price = {"actual_price": 100, "tax_price": 20}
I made the changes in the model like below:
    from django.db import models
    from django.contrib.postgres.fields import JSONField  # models.JSONField on Django 3.1+

    class Bill(models.Model):
        price = JSONField(blank=True, null=True)
I ran makemigrations and migrate, but the change is not reflected in the DB, and there are no errors either. I get the error "Column: price does not exist" when my code tries to read from the DB.
I tried the following things, based on other StackOverflow questions:
Removing the field, running migrations and applying them. Removing the field works fine, but when I add the field back it is not treated as a new field to be added; it simply doesn't get created.
Removing the migration files from the migrations folder, re-running makemigrations and re-applying.
Please note:
I am using a Postgres DB. The same thing works with the SQLite DB that Django ships with, but not with Postgres.
Losing the data is not an option, as this is production data.
I tried adding the column manually in Postgres with an ALTER TABLE ... ADD COLUMN query, which worked perfectly fine. That was a hack I used as a last resort.
Some records already had data in the original integer field before the updated migration was applied. Strangely, Django also never asked me to provide a value for the existing rows.
I need Django migrations to apply the changes automatically. With this issue, adding a new column and changing a column's datatype do not work at all (other operations, like removing a column, work fine).
Since there was no other option and no solution turned up, I pointed my application at a new Postgres DB and it started working fine. Lessons learnt for the future:
Don't change the datatype of a model attribute directly and then apply migrations.
First remove the field and apply migrations, then recreate the field with the same name and the required datatype and apply migrations again (see the sketch below).
Use the same database locally that you use in production, so you avoid conflicts after deploying. I had used SQLite locally and Postgres in production; from now on I will use Postgres locally as well, so I can debug such issues on my own machine.
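For reference, a remove-then-recreate migration along those lines could look roughly like this; the app label and dependency are assumptions, and note that it discards the existing column's data:

    from django.contrib.postgres.fields import JSONField
    from django.db import migrations

    class Migration(migrations.Migration):

        dependencies = [
            ('billing', '0002_previous'),  # assumed app label and previous migration
        ]

        operations = [
            # Drop the old integer column, then add it back as JSON.
            migrations.RemoveField(model_name='bill', name='price'),
            migrations.AddField(
                model_name='bill',
                name='price',
                field=JSONField(blank=True, null=True),
            ),
        ]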

Does adding new SQLAlchemy.model will be automatically added to my database?

If I add another SQLAlchemy model, will it automatically be added to my database? Or do I need to run some commands for it to be created or checked?
It will only be added if you run db.create_all() again.
If your models are going to change frequently, it is worth getting to grips with something like Flask-Migrate, which handles database migrations really nicely.
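A minimal sketch of wiring up Flask-Migrate; the connection string and the User model are made up for illustration:

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy
    from flask_migrate import Migrate

    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'  # assumed connection string

    db = SQLAlchemy(app)
    migrate = Migrate(app, db)

    class User(db.Model):
        # A hypothetical model; newly added models are picked up by the
        # autogenerate step when you create a migration.
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String(80))

With that in place, flask db init (once), then flask db migrate and flask db upgrade generate and apply a migration whenever the models change.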

Edit database outside Django ORM

If one is using Django, what happens with changes made directly to the database (in my case Postgres) through either pgAdmin or psql?
How are such changes handled by migrations? Do they take precedence over what the ORM thinks the state of affairs is, or does Django override them and impose its own sense of change history?
Finally, are any of these issues affected, or avoided, by git, if at all?
Thanks.
You can exclude a model completely from Django migrations, and then you are responsible for adjusting the schema to match the Django code (or the Django code to match the existing schema):
    class SomeModel(models.Model):
        class Meta:
            managed = False
            db_table = "some_table_name"

        name = models.Field(...)
Note that you can't have it both ways, so migrations are preferred when possible. You can always define a custom SQL migration, which removes the need for external changes. However, sometimes you do need to manage the schema elsewhere instead of through migrations, and then you use managed = False.
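For the custom-SQL route, a hedged sketch of a RunSQL migration (the table, column and app label are made up for illustration):

    from django.db import migrations

    class Migration(migrations.Migration):

        dependencies = [
            ('myapp', '0003_previous'),  # assumed
        ]

        operations = [
            migrations.RunSQL(
                sql="ALTER TABLE some_table_name ADD COLUMN notes text;",
                reverse_sql="ALTER TABLE some_table_name DROP COLUMN notes;",
            ),
        ]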
The migrations system does not look at your current schema at all. It builds up its picture from the graph of previous migrations and the current state of models.py. That means that if you make changes to the schema from outside this system, it will be out of sync; if you then make the equivalent change in models.py and create migrations, when you run them you will probably get an error.
For that reason, you should avoid doing this. If it has been done already, you can apply the conflicting migration in fake mode (migrate --fake), which simply marks it as applied without actually running its SQL against the database. But it's simpler to do everything via migrations in the first place.
git has no impact on this at all, other than to reiterate that migrations are code, and should be added to your git repo.

Django: After migration, all model fields with unique=True get IntegrityError Duplicate Entry for Key

I have a model with a uniqueblogname field that is set to unique=True. In my views, I do something like this:
    try:
        MyModel.objects.get(uniqueblogname=userinput)  # I ask the user to input
        # a name of a blog they want to own on the site (all blogs must have a unique name).
        # If it is taken, print an error message that the blog name is taken...
    except MyModel.DoesNotExist:
        MyModel.objects.create(uniqueblogname=userinput)  # if no blog has that name,
        # the blog object is created
I recently moved my data to a brand-new blank database with dumpdata and loaddata and migrated my new model changes to it (I wanted to keep the old database as it was and archive it). Everything went smoothly and the new migrations applied without errors. However, whenever I run the check above against any old blog name, I now get this error:
1062, "Duplicate entry (the user's input here) for key 'uniqueblogname'"
However, if I search for a completely new blog name, the object gets created, and if I then search for it again with the same check, everything works fine. It seems as if the code that retrieves the old MyModel objects is not executed properly, so Django tries to create a new MyModel, only to realize it would be a duplicate and throw this error.
The most confusing part is, as I said: if I enter a completely new blog name and the object is created fresh in the new database, the check works perfectly every time (so I don't suspect a logic issue in my code). But I can't check the older records that I populated with loaddata.
Any ideas? Very appreciative of any suggestions. Thanks.
It's hard to comment on your specific case without knowing more. I'd start by looking at the database itself and seeing if there's anything different about your old rows as opposed to the newly created rows.
I do want to point out that your current code has a race condition, since another process could insert a row with the same name in between the get() and the create(). I suggest using get_or_create() instead.
This method is atomic assuming correct usage, correct database configuration, and correct behavior of the underlying database.
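Applied to the code in the question, that could look roughly like this:

    # get_or_create() returns an (object, created) tuple; `created` tells you
    # whether the name was free or the blog already existed.
    blog, created = MyModel.objects.get_or_create(uniqueblogname=userinput)
    if not created:
        # The blog name is already taken; report that back to the user.
        ...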

How do I test a django database schema?

I want to write tests that show whether or not the database is in sync with my models.py file. Actually, I have already written them, only to find out that Django creates a new test database from models.py each time the tests are run.
Is there any way I can make these tests use the existing database schema, the one that's in MySQL/PostgreSQL, and not the one defined in /myapp/models.py?
I don't care about the data in the database, only about its schema, i.e. I want my tests to notice if a table in the database has fewer fields than the corresponding model in models.py.
I'm using the unittest framework (actually the django extension to it) if this has any relevance.
thanks
What we did was override the default test runner so that it wouldn't create a new database to test against. That way it runs the tests against whatever the current local database looks like. But be very careful if you use this method, because any changes your tests make to the data will be permanent. I made sure all our tests restore any changes back to their original state, and we keep a pristine, backed-up copy of the database on the server.
To do this, copy the run_tests function from django.test.simple into your project -- I put mine in myproject/test/test_runner.py.
Then make the following changes to that function:
    # change:
    old_name = settings.DATABASE_NAME
    from django.db import connection
    connection.creation.create_test_db(verbosity, autoclobber=not interactive)
    result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
    connection.creation.destroy_test_db(old_name, verbosity)

    # to:
    result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
Make sure to do all the necessary imports at the top and then in your settings file set the setting:
TEST_RUNNER = 'myproject.test.test_runner.run_tests'
Now when you run ./manage.py test Django will run the tests against the current state of your database rather than creating a new version based on your current model definitions.
Another thing you can do is create a copy of your database locally and then add a check like this to your new run_tests() function:
    if settings.DATABASE_NAME != 'my_test_db':
        sys.exit("You cannot run tests using the %s database. Please switch "
                 "DATABASE_NAME to my_test_db in settings.py" % settings.DATABASE_NAME)
That way there's no danger of running tests against your main database.
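Putting those pieces together, the custom runner could look roughly like this; it is a sketch against the old function-based TEST_RUNNER API that this answer assumes, with the suite-building from django.test.simple elided:

    import sys
    import unittest

    from django.conf import settings

    def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]):
        # Safety net: refuse to run against anything but the local test copy.
        if settings.DATABASE_NAME != 'my_test_db':
            sys.exit("You cannot run tests using the %s database." % settings.DATABASE_NAME)

        # Build the suite from test_labels here, as django.test.simple does.
        suite = unittest.TestSuite(extra_tests)

        # Note: no create_test_db()/destroy_test_db() calls, so the tests run
        # against the existing schema.
        result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
        return len(result.failures) + len(result.errors)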
