I want to remove null=True from a TextField:
- footer = models.TextField(null=True, blank=True)
+ footer = models.TextField(blank=True, default='')
I created a schema migration:
manage.py schemamigration fooapp --auto
Since some footer columns contain NULL I get this error if I run the migration:
django.db.utils.IntegrityError: column "footer" contains null values
I added this to the schema migration:
for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
    sender.footer = ''
    sender.save()
Now I get:
django.db.utils.DatabaseError: cannot ALTER TABLE "fooapp_emailsender" because it has pending trigger events
What is wrong?
Another reason for this may be that you are trying to set a column to NOT NULL while it still contains NULL values.
Every migration runs inside a transaction. In PostgreSQL you must not update a table's data and then alter that table's schema within the same transaction.
You need to split the data migration and the schema migration. First create the data migration with this code:
for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
    sender.footer = ''
    sender.save()
Then create the schema migration:
manage.py schemamigration fooapp --auto
Now you have two transactions and the migration in two steps should work.
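For reference, the generated data migration file ends up looking roughly like this (a sketch; South also generates a frozen models dict and other boilerplate that is omitted here):

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Replace NULL footers before the schema migration adds NOT NULL.
        for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
            sender.footer = ''
            sender.save()

    def backwards(self, orm):
        # Empty strings are valid under the old schema too, so nothing to undo.
        pass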
In the operations list I put SET CONSTRAINTS:
operations = [
    migrations.RunSQL('SET CONSTRAINTS ALL IMMEDIATE;'),
    migrations.RunPython(migration_func),
    migrations.RunSQL('SET CONSTRAINTS ALL DEFERRED;'),
]
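For context, a minimal sketch of the full migration file this could live in; migration_func and the app/model names are hypothetical:

from django.db import migrations

def migration_func(apps, schema_editor):
    # Hypothetical data fix-up; replace with your own logic.
    EmailSender = apps.get_model('fooapp', 'EmailSender')
    EmailSender.objects.filter(footer__isnull=True).update(footer='')

class Migration(migrations.Migration):

    dependencies = [
        ('fooapp', '0001_initial'),  # adjust to your latest migration
    ]

    operations = [
        migrations.RunSQL('SET CONSTRAINTS ALL IMMEDIATE;'),
        migrations.RunPython(migration_func),
        migrations.RunSQL('SET CONSTRAINTS ALL DEFERRED;'),
    ]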
If you are adding a non-nullable field, you need to do it in two migrations:
1. AddField and RunPython to populate it
2. AlterField to change the field to be non-nullable
Explanation
On PostgreSQL and SQLite, this problem can occur if you have a sufficiently complex RunPython command combined with schema alterations in the same migration. For example, if you are adding a non-nullable field, the typical migration steps are:
1. AddField to add the field as nullable
2. RunPython to populate it
3. AlterField to change the field to be non-nullable
On SQLite and Postgres, this can cause problems because the whole thing is being done in one transaction.
The Django docs have a specific warning about this:
On databases that do support DDL transactions (SQLite and PostgreSQL), RunPython operations do not have any transactions automatically added besides the transactions created for each migration. Thus, on PostgreSQL, for example, you should avoid combining schema changes and RunPython operations in the same migration or you may hit errors like OperationalError: cannot ALTER TABLE "mytable" because it has pending trigger events.
If this is the case, the solution is to separate your migration into multiple migrations. In general, the way to split is to have a first migration containing the steps up through the RunPython operation and a second migration containing all the ones after it. Thus, in the case described above, the pattern would be the AddField and RunPython in one migration, and the AlterField in a second, as in the sketch below.
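A sketch of the split, with hypothetical app, model, and field names; in practice both files are generated by makemigrations and then edited:

# Migration 1 (e.g. fooapp/migrations/0002_add_footer.py): AddField + RunPython
from django.db import migrations, models

def populate_footer(apps, schema_editor):
    EmailSender = apps.get_model('fooapp', 'EmailSender')
    EmailSender.objects.filter(footer__isnull=True).update(footer='')

class Migration(migrations.Migration):
    dependencies = [('fooapp', '0001_initial')]
    operations = [
        # Add the field as nullable first.
        migrations.AddField('emailsender', 'footer',
                            models.TextField(null=True, blank=True)),
        migrations.RunPython(populate_footer, migrations.RunPython.noop),
    ]

# Migration 2 (e.g. fooapp/migrations/0003_footer_not_null.py): AlterField only
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [('fooapp', '0002_add_footer')]
    operations = [
        # Now that every row has a value, make the field non-nullable.
        migrations.AlterField('emailsender', 'footer',
                              models.TextField(blank=True, default='')),
    ]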
Have just hit this problem. You can also use db.start_transaction() and db.commit_transaction() in the schema migration to separate data changes from schema changes. Probably not as clean as having a separate data migration, but in my case I would need a schema migration, a data migration, and then another schema migration, so I decided to do it all at once.
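A rough sketch of what that can look like inside the schema migration's forwards(); db.start_transaction() and db.commit_transaction() are South's API, while the alter_column arguments here are only illustrative:

from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Data change first, then close the transaction South opened.
        for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
            sender.footer = ''
            sender.save()
        db.commit_transaction()

        # Schema change in a fresh transaction of its own.
        db.start_transaction()
        db.alter_column('fooapp_emailsender', 'footer',
                        self.gf('django.db.models.fields.TextField')(default=''))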
You are altering the column schema. The footer column can no longer contain NULL values, and there are most likely NULLs already stored in that column for existing rows. Django will update those NULL rows to the new default value as part of the migrate command. It seems Django tries to update the rows where the footer column is NULL and to change the schema at the same time (I'm not sure).
The problem is that you can't alter the schema of the very column whose values you are updating in the same transaction.
One solution is to delete the migration file that updates the schema, run a script that updates all those values to your default value (see the sketch below), and then generate and run the migration again. That way the data update is already done, and the Django migration only alters the schema.
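A minimal sketch of such a script, assuming the model from the question; it can be run from manage.py shell before re-running the migration:

# Run inside `manage.py shell`.
from fooapp.models import EmailSender

# Set the new default on existing NULL rows, so the migration
# only has to change the schema.
EmailSender.objects.filter(footer__isnull=True).update(footer='')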
In my case I had:
AddField
RunPython
RemoveField
Then I just moved the last RemoveField to a new migration file, and that fixed the problem.
Step 1) Remove the latest migration from the migrations folder and remove the newly added fields from your models.
Step 2) Run makemigrations and migrate again.
Step 3) Add back the field that was removed in the first step.
Step 4) Run makemigrations and migrate again.
Problem solved.
I use Django 1.11, PostgreSQL 9.6 and the Django migration tool. I couldn't find a way to specify the column order. In the initial migration, changing the ordering of the fields is fine, but what about migrations.AddField() calls? AddField calls can also happen for foreign key additions in the initial migration. Is there any way to specify the ordering, or am I just obsessed with the order when I shouldn't be?
Update after the discussion
The PostgreSQL DBMS doesn't support positional column addition, so it is practically meaningless to expect this facility from the migration tool.
AFAIK, there's no officially supported way to do this, because fields are supposed to be atomic and it shouldn't be relevant. However, it messes with my obsessive-compulsive side as well, and I like my columns to be ordered for when I need to debug things in dbshell, for example. Here's what I've found you can do:
1. Make a migration with python manage.py makemigrations
2. Edit the migration file and reorder the fields in migrations.CreateModel, as in the sketch below
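For example, a hypothetical initial migration after reordering the fields list inside migrations.CreateModel by hand (model and field names are illustrative):

from django.db import migrations, models

class Migration(migrations.Migration):

    initial = True
    dependencies = []

    operations = [
        migrations.CreateModel(
            name='YourModel',
            # The order of this list is the column order in the generated
            # CREATE TABLE; the auto id field conventionally stays first.
            fields=[
                ('id', models.AutoField(primary_key=True)),
                ('first_column', models.CharField(max_length=50)),
                ('second_column', models.TextField(blank=True)),
            ],
        ),
    ]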
I am not 100% sure about PostgreSQL (as the update above notes, it doesn't support positional column changes), but this is what it looks like in MySQL after you have created the database:
ALTER TABLE yourtable.yourmodel
CHANGE COLUMN columntochange columntochange INT(11) NOT NULL AFTER columntoplaceunder;
Or if you have a GUI (MySQL Workbench in my case) you can go to the table settings and simply drag and drop columns as you wish and click APPLY.
I am trying to use alembic migrations to act on different versions of the same database. An example would be that I have two databases, one live and one for testing. Each of them might be in different states of migration. For one, the test database might not exist at all.
Say live has a table table1 with columns A and B. Now I would like to add column C. I change my model to include C and I generate a migration script that has the following code
op.add_column('table1', sa.Column('C', sa.String(), nullable=True))
This works fine with the existing live database.
If I now call alembic upgrade head against a non-existent test database, I get an (OperationalError) duplicate column name... error. I assume this is because my model contains the C column and alembic/SQLAlchemy creates the full table automatically if it does not exist.
Should I simply trap the error, or is there a better way of doing this?
I would suggest that, immediately after your test db is created, you stamp it with head:
command.stamp(configs_for_test_db, 'head')
This will go ahead and insert the head revision number into the appropriate alembic table without actually running migrations so that the revision number will reflect the state of the db (namely that your newly created db is up to date wrt migrations). After the db is stamped, alembic upgrade should behave properly.
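A minimal sketch of that flow, assuming the tables are created with SQLAlchemy's metadata.create_all() and that alembic.ini exists; the model import and database URL are hypothetical:

from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine

from myapp.models import Base  # hypothetical declarative base

TEST_DB_URL = 'postgresql://localhost/test_db'  # hypothetical URL

engine = create_engine(TEST_DB_URL)
Base.metadata.create_all(engine)  # create the full, current schema

# Mark the freshly created database as already at the latest revision,
# without running any migration scripts.
configs_for_test_db = Config('alembic.ini')
configs_for_test_db.set_main_option('sqlalchemy.url', TEST_DB_URL)
command.stamp(configs_for_test_db, 'head')

# From here on, `alembic upgrade head` behaves normally.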
With this setup:
- Development environment
- Flask
- SQLAlchemy
- Postgres
- Possibly Alembic
Say I have a database with some tables populated with random data. As far as I know, Flask-Migrate, which uses Alembic, will not preserve the data; it only keeps the models and the database synchronized.
But what is the difference between using Alembic and just dropping and re-creating all the tables?
Something like:
db.create_all()
The second question: what happens to the data when something changes in the models? Will the data be lost, or can Alembic preserve the previously populated data?
Well, my idea is to populate the database with some data and then avoid any loss of data when the models change. Is Alembic the solution?
Or do I need to import the data, from a .sql file for example, whenever I change the models and database?
I am the Flask-Migrate author.
You are not correct. Flask-Migrate (through Alembic) will always preserve the data that you have in your database. That is the whole point of working with database migrations, you do not want to lose your data.
It seems you already have a database with data in it and you want to start using migrations. You have two options to incorporate Flask-Migrate into your project:
Only track migrations going forward, i.e. leave your initial database schema outside of migration tracking.
For this you really have nothing special to do. Just do manage.py db init to create the migrations repository and when you need to migrate your database do so normally with manage.py db migrate. The disadvantage of this method is that Flask-Migrate/Alembic do not have the initial schema of the database, so it is not possible to recreate a database from scratch.
Implement an initial migration that brings your database to your current state, then continue tracking future migrations normally.
This requires a little bit of a trick. Here you want Alembic to record an initial migration that defines your current schema. Since Alembic creates migrations by comparing your models to your database, the trick is to replace your real database with an empty database and then generate a migration. After the initial migration is recorded, you restore your database, and from then on you can continue migrating your database normally. The sequence is sketched below.
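A rough sketch of that sequence using the same manage.py commands as above (the temporary database URI is hypothetical):

# 1. Temporarily point SQLALCHEMY_DATABASE_URI at an empty database,
#    e.g. postgresql://localhost/empty_tmp, then generate the initial migration:
manage.py db init
manage.py db migrate

# 2. Point the app back at your real database and mark it as already
#    matching that initial migration, without running it:
manage.py db stamp head

# 3. From then on, migrate normally:
manage.py db migrate
manage.py db upgrade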
I hope this helps. Let me know if you have any more questions.
I added orgname = models.CharField(max_length=50) to an existing class in my models.py and ran python manage.py syncdb, but I discovered that it doesn't create new columns (I'm using PostgreSQL, by the way), so I needed to run python manage.py sqlall <myapp>, which output the following:
BEGIN;
CREATE TABLE "file_uploader_files" (
"id" serial NOT NULL PRIMARY KEY,
"file" varchar(100) NOT NULL,
"orgname" varchar(50) NOT NULL
)
;
COMMIT;
Yet, when I go into the shell for Django or look in pgAdmin3, the column is still not created. What am I doing wrong? I'd add it manually but I'm not sure how.
P.S. The table was already created beforehand, and so was the file varchar column; orgname was added after I had initially created the table.
The documentation for the sqlall command says:
Prints the CREATE TABLE and initial-data SQL statements for the given app name(s).
It prints the SQL, it doesn't run anything. Django will never modify your schema; you'll need to do it yourself - the output above can help by showing you the type of the orgname field (see the sketch below). Or use something like South.
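For example, a sketch of adding the column by hand from Python, using the type shown in the sqlall output; the DEFAULT is needed because the table already has rows and the column is NOT NULL:

# One-off fix, e.g. from `manage.py shell`.
from django.db import connection

cursor = connection.cursor()
cursor.execute(
    "ALTER TABLE file_uploader_files "
    "ADD COLUMN orgname varchar(50) NOT NULL DEFAULT ''"
)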
Also see this SO question: update django database to reflect changes in existing models (the top two answers cover your question).
From the accepted answer:
note: syncdb can't update your existing tables. Sometimes it's impossible to decide what to do automagically - that's why South scripts are so great.
And from another answer...
python manage.py reset <your_app>
This will update the database tables for your app, but will completely destroy any data that existed in those tables.
One of my Django models is a subclass and I want to change its superclass to one that is very similar to the original one. In particular, the new superclass describes the same object and has the same primary key. How can I make South create the new OneToOne field and copy the values from the old one to the new one?
In south, there are two kinds of migrations: schema migrations and data migrations.
After you've created the schemamigration, create a corresponding data migration:
./manage.py datamigration <app> <migration_name>
Do not run the migration (yet). Instead, open up the migration file you just created.
You'll find a method named forwards(). In this method you define the procedure by which values from the old tables get copied to the new ones.
If you're changing the structure of a given table to a more complex layout, a common approach is to have two schema migrations around a data migration: the first schema migration adds fields, the data migration translates the old fields to the new fields, and the second schema migration deletes the old fields. You can do just about anything with the database in the forwards() method, as long as you keep track of which schema (previous or current) you're accessing. Generally, you read through the orm accessors and write through the traditional Django accessors; a sketch follows below.
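A sketch of what such a forwards() can look like for the question's superclass change, copying the primary-key link from the old one-to-one pointer to the new one (model and field names are hypothetical):

def forwards(self, orm):
    # Both old_parent and new_parent columns must exist in the schema
    # frozen at this migration, i.e. between the two schema migrations.
    for child in orm['app.Child'].objects.all():
        child.new_parent_id = child.old_parent_id  # same primary key
        child.save()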
The South Data Migration Tutorial covers this in some detail. It shows you how to use South's orm reference to access the database using the schema prior to the schema migration and gives access to the database without Django complaining about fields it doesn't understand.
If you're renaming a class, that can be tricky: it involves creating the new table, migrating the data from one to the other, and deleting the old table. South can do it, but it might take more than one pass of alternating schema and data migrations.
South also has a backwards() method, which allows you to return your database tables to a previous step. In some cases this may be impossible; the new table may record information that would be lost in a downgrade. I recommend throwing an exception in backwards() if you're not in DEBUG mode.
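For instance, a small sketch of an intentionally irreversible backwards():

from django.conf import settings

def backwards(self, orm):
    # The new table records information that would be lost in a downgrade.
    if not settings.DEBUG:
        raise RuntimeError('This migration cannot be safely reversed.')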