Cannot complete Flask-Migrate migration - Python

I've set up a local Postgres DB with SQLAlchemy and cannot commit my first entry. I keep getting this error...
ProgrammingError: (ProgrammingError) relation "user" does not exist
LINE 1: INSERT INTO "user" (name, email, facebook_id, facebook_token...
It seems like the fields don't match those in the database. I'm trying to migrate using flask-migrate, but when I run $ python app.py db migrate I get this error...
raise util.CommandError("No such revision '%s'" % id_)
alembic.util.CommandError: No such revision '39408d6b248d'
It may be best to delete everything and start from scratch, as it seems I have botched my database setup and/or migration, but I'm not sure how to do that.
UPDATE: The database has started working now (I dropped and created it again). However, I'm still getting the same error when trying to run migrations, and it turns out the revision '39408d6b248d' in the "no such revision" message refers to a migration from an unrelated project. I re-installed flask-migrate, but I get the same error.

flask-migrate creates a table named "alembic_version" in your database,
so you should drop that table and delete the migrations folder in your project,
and then run $ python app.py db init again.
After that, $ python app.py db migrate should work fine.
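For reference, the whole reset might look something like this sketch (mydb is a placeholder database name; adjust the commands to your setup):
$ psql mydb -c "DROP TABLE alembic_version;"   # remove Alembic's bookkeeping table
$ rm -r migrations/                            # remove the stale migration scripts
$ python app.py db init                        # create a fresh migrations folder
$ python app.py db migrate                     # generate the initial migration
$ python app.py db upgrade                     # apply it to the database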

Alembic keeps the migration history in your database; this is why it still recognises another revision there. I keep my project on Heroku, so I was able to just do heroku pg:pull ... to get a fresh copy of my database. Prior to this you will have to drop your local db. If you don't want to drop your local database, dropping the alembic_version table should work too. I use PG Commander as a GUI tool to quickly browse my databases.
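A sketch of the Heroku route, assuming a local database named mylocaldb and a Heroku app named myapp (both placeholders):
$ dropdb mylocaldb                                    # pg:pull refuses to overwrite an existing database
$ heroku pg:pull DATABASE_URL mylocaldb --app myapp   # pull a fresh copy of the remote database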

The first step is to run this command:
alembic current
You should get an error like the one above (the goal is to make sure this command returns a valid response).
The reason you're getting it is that Alembic is confused about your current state: it assumes you should be at revision 39408d6b248d, but then finds that this revision doesn't exist.
To investigate, let's find out which revisions Alembic considers valid. Run this command:
alembic history --verbose
You'll get a list of all previous revisions (note: it's a good idea to attach a message to each revision; think of it as a good git commit message):
Rev: 594cc72f56fd (head)
Parent: 262f40e28682
Path: ***************

    adjust context_id in log table so that it is a substring of the object_id

    Revision ID: 594cc72f56fd
    Revises: 262f40e28682
    Create Date: 2015-07-22 14:31:52.424862

Rev: 262f40e28682
Parent: 1dc902bd1c2
Path: ***************

    add context_id column to log table

    Revision ID: 262f40e28682
    Revises: 1dc902bd1c2
    Create Date: 2015-07-22 11:05:37.654553

Rev: 1dc902bd1c2
Parent: <base>
Path: ***************

    Initial database setup

    Revision ID: 1dc902bd1c2
    Revises:
    Create Date: 2015-07-06 09:55:11.439330
The revision 39408d6b248d clearly doesn't exist among the revisions above. It is stored in the alembic_version table in the database; you can verify that by connecting to your database and running:
vibereel=> select * from alembic_version;
 version_num
--------------
 57ac999dcaa7
Now review the state of your database and see where it fits relative to the revisions listed above.
In my case, poking around my database made it obvious which revision I was actually at: the initial database setup had been done, but the later revisions hadn't been applied yet.
So I replace the value in the database with the one I found from the history command above:
vibereel=> update alembic_version set version_num = '1dc902bd1c2';
Now running alembic current returns:
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
1dc902bd1c2
done.

It means that the entry in the alembic_version table of your db is "39408d6b248d", and there is no migration file related to it in the migrations folder (by default migrations/versions).
So it's best to drop the alembic_version table from your db, then run
$ python app.py db history to get the new head of the migrations, say, 5301c31377f2.
Now run $ python app.py db stamp 5301c31377f2 to let Alembic know that this is your migration head (it gets stored in the alembic_version table).
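Put together, the sequence might look like this (mydb is a placeholder database name, and 5301c31377f2 is the example head from above):
$ psql mydb -c "DROP TABLE alembic_version;"   # forget the stale revision
$ python app.py db history                     # find the current head, e.g. 5301c31377f2
$ python app.py db stamp 5301c31377f2          # record it as the current revision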

Assuming that you have checked that the database exists using psql or pgAdmin, this error usually means exactly what it says. It can be due to either:
not connecting to the correct database instance (check your db URL: host, port, and db name)
not correctly configuring SQLAlchemy (see: SQLAlchemy create_all() does not create tables)
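A minimal sanity check for both points, assuming a standard Flask-SQLAlchemy setup (the URL and the User model are placeholders):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# make sure this URL points at the same database you inspect with psql
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://localhost/mydb'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))

with app.app_context():
    db.create_all()  # creates the "user" table if it is missing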

I got the same error yesterday. In my case, the revision number '39408d6b248d' came from previous migration upgrades: each time you run an upgrade script, an Alembic version is generated and stored in the database (data.sqlite in my case).
You must have run an upgrade that generated '39408d6b248d', but then deleted the whole migrations/ directory and removed all the upgrade scripts. The database (e.g. data.sqlite) still stores '39408d6b248d', but no corresponding migration script exists anymore.
My solution was to delete the Alembic version stored in the database and redo all the upgrades from scratch.
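For a SQLite database, clearing the stored version is a one-liner (data.sqlite is the example file name used above):
$ sqlite3 data.sqlite "DELETE FROM alembic_version;"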

I ran into a similar issue: after running the python manage.py db migrate command, the database tables were not created; there was only an alembic_version table in the database.
I found the solution in the flask-migrate documentation:
https://flask-migrate.readthedocs.org/en/latest/
The migration script needs to be reviewed and edited, as Alembic
currently does not detect every change you make to your models. In
particular, Alembic is currently unable to detect indexes. Once
finalized, the migration script also needs to be added to version
control.
Then you can apply the migration to the database:
python manage.py db upgrade
This command created the tables and applied the database migrations.

I had the same issue. Based on the version in the alembic_version table in the db, the migration command was looking for that version in the /migrations/versions folder, which had already been deleted. Therefore the solution is to delete the alembic_version table:
If you are using sqlite,
1. Open your xxx.sqlite db file.
sqlite3 xxx.sqlite
2. check tables
.tables
3. you will see alembic_version; drop it
DROP TABLE alembic_version;

Related

How to VACUUM a SQLite database from within an Alembic migration?

I am using SQLAlchemy's Alembic to perform DB migrations on a SQLite database. One of my migrations removes many redundant records, and I would like to VACUUM the database after the deletion.
Here's how I'm trying to do this in my migration's upgrade() method:
def upgrade():
    # deleting records here using op.execute()...
    op.execute("VACUUM")
When the migration runs it fails with this error message:
E sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) cannot VACUUM from within a transaction
E [SQL: VACUUM]
E (Background on this error at: https://sqlalche.me/e/14/e3q8)
The link only provides a rather general description of what an OperationalError is.
How can I overcome this error and VACUUM my database from within my migration?
Is there a way to exclude this specific command or this specific migration from running in a transaction?
PS - In general I would like my migrations to run in transactions so I would prefer not to change Alembic's default behavior (as set in env.py).
I was able to successfully execute the VACUUM command in my migration by wrapping it like so:
with op.get_context().autocommit_block():
    op.execute("VACUUM")
This did not require any changes to env.py.
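For context, the full upgrade() might look something like this sketch (the DELETE statement is a hypothetical stand-in for the actual cleanup; autocommit_block() requires Alembic 1.2 or newer):
from alembic import op

def upgrade():
    # the deletion runs inside the migration's usual transaction
    op.execute("DELETE FROM log WHERE context_id IS NULL")  # hypothetical cleanup
    # VACUUM cannot run inside a transaction, so step outside it briefly
    with op.get_context().autocommit_block():
        op.execute("VACUUM")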

How do I set up Alembic for a SQLite database attached as a schema?

I've tried many contortions on this problem to try to figure out what's going on.
My SQLAlchemy code specified tables as schema.table. I have a special connection object that connects using the specified connect string if the database is PostgreSQL or Oracle, but if the database is SQLite, it connects to a :memory: database, then attaches the SQLite file-based database using the schema name. This allows me to use schema names throughout my SQLAlchemy code without a problem.
But when I try to set up Alembic to see my database, it fails completely. What am I doing wrong?
I ran into several issues that had to be worked through before I got this working.
Initially, Alembic didn't see my database at all. If I tried to specify it in the alembic.ini file, it would load the SQLite database using the default schema, but my model code specified a schema, so that didn't work. I had to change alembic/env.py in run_migrations_online() to call my connection method from my code instead of using engine_from_config. In my case, I created a database object that had a connect() method that would return the engine and the metadata. I called that as connectable, meta = db.connect(). I would return the schema name with schema=db.schema(). I had to import the db class from my SQLAlchemy code to get access to these.
Now I was getting a migration that would build up the entire database from scratch, but I couldn't run that migration because my database already had those changes. So apparently Alembic wasn't seeing my database. Alembic also kept telling me that my database was out of date. The problem there was that Alembic's alembic_version table was being written to my :memory: database, and as soon as the connection was dropped, so was the database. So to get Alembic to remember the migration, I needed that table to be created in my database. I added more code to env.py to pass the schema to context.configure using version_table_schema=my_schema.
When I went to generate the migration again, I still got the migration that would build the database from scratch, so Alembic STILL wasn't seeing my database. After lots more Googling, I found that I needed to pass include_schemas=True to context.configure in env.py. But after I added that, I started getting tracebacks from Alembic.
Fortunately, my configuration was set up to provide both the connection and the metadata. By changing the target_metadata=target_metadata line to target_metadata=meta (my local metadata returned from the connection), I got around these tracebacks as well, and Alembic started to behave properly.
So to recap, to get Alembic working with a SQLite database attached as a schema name, I had to import the connection script I use for my Flask code. That connection script properly attaches the SQLite database, then reflects the metadata. It returns both the engine and the metadata. I return the engine to the "connectable" variable in env.py, and return the metadata to the new local variable meta. I also return the schema name to the local variable schema.
In the with connectable.connect() as connection: block, I then pass to context.configure additional arguments target_metadata=meta, version_table_schema=schema, and include_schemas=True where meta and schema are my new local variables set above.
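A sketch of how run_migrations_online() might end up, reconstructed from the description above (the db helper module and its connect()/schema() methods are this project's own code, not a standard API):
from alembic import context
from myproject.database import db  # hypothetical import of the connection helper

def run_migrations_online():
    connectable, meta = db.connect()  # attaches the SQLite file under the schema name
    schema = db.schema()

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=meta,         # the reflected metadata, not the config's
            version_table_schema=schema,  # keep alembic_version in the attached db
            include_schemas=True,         # make Alembic look beyond the default schema
        )
        with context.begin_transaction():
            context.run_migrations()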
With all of these changes, I thought I was able to work with SQLite databases attached as schemas. Unfortunately, I continued to run into problems, and eventually decided that I simply wouldn't use Alembic with SQLite. Our rule now is that Alembic migrations are only for non-SQLite databases, and SQLite data has to be migrated to another database before attempting an Alembic migration of the data.
I'm documenting this so that anyone else facing this may be able to follow what I've done and possibly get Alembic working for SQLite.

OperationalError: cursor "_django_curs_<id>" does not exist

We have an online store web app powered by Django, PostgreSQL, and Heroku.
For a specific campaign (you can think of a campaign as a product to purchase), we have sold 10k+ copies successfully. Yet according to our Sentry reports, some of our users encounter this error. The common characteristic of these users is that none of them had address information before the purchase. Generally, users fill out the address form right after registering; if they don't, they need to fill out the form while purchasing the product and submit both together.
This is how the trace looks like:
OperationalError: cursor "_django_curs_140398688327424_146" does not exist
(66 additional frame(s) were not displayed)
...
File "store/apps/store_main/templatetags/store_form_filters.py", line 31, in render_form
return render_to_string('widgets/store_form_renderer.html', ctx)
File "store/apps/store_main/templatetags/store_form_filters.py", line 20, in render_widget
return render_to_string('widgets/store_widget_renderer.html', ctx)
File "store/apps/store_main/widgets.py", line 40, in render
attrs=attrs) + "<span class='js-select-support select-arrow'></span><div class='js-select-support select-arrow-space'><b></b></div>"
OperationalError: cursor "_django_curs_140398688327424_146" does not exist
Another weird common thing: there are exception messages between SQL queries before the failure (visible in our Sentry screenshot, not reproduced here).
I'm adding this in case it is somehow related. What may also be related is that the users who get this error are the ones who try to purchase the campaign right after a bulk mailing, so heavy traffic might be the reason, but we are not sure of that either.
We asked Heroku about the problem, since they host the Postgres instance, but they don't have a clue either.
I know the formal reason for this error is trying to reach a cursor after a commit: since the cursor is destroyed when the transaction ends, reaching for it raises this error. But I don't see that happening in our scenario; we are not touching the cursor in any way. What am I missing? What may produce this error? How can we prevent it? Any ideas would be appreciated.
The reason for your error might be that you added fields to a model and forgot to makemigrations and migrate.
Have a look at this answer: When in run test cases then I will get this error: psycopg2.OperationalError: cursor "_django_curs_140351416325888_23" does not exist
If you're using pytest-django and have enabled the --reuse-db optimization, and you have made DB migrations between test runs, you need to re-create the DB tables:
pytest --create-db
Most likely you have forgotten to run makemigrations and migrate against your database.
If you are sure you did make the migrations, and running python manage.py makemigrations and python manage.py migrate does not pick up the changes you made, then the database and the models are out of sync.
This situation can be very frustrating if you have a big database, because you would need to inspect your models manually.
To help out, you can try this trick, which has been working for me.
Step 1 (delete all migrations from apps)
I am going to assume you are using a Unix terminal. Run sudo rm -rv */migrations/*. This removes all the migration files and caches.
Step 2 (recreate the migrations folder in each app)
Run mkdir -p <app-folder>/migrations && touch <app-folder>/migrations/__init__.py. Replace <app-folder> with the name of each app in the INSTALLED_APPS list in your Django settings file.
Step 3 (make migrations)
Here we repopulate the migrations folder of each app with migration files. Run python manage.py makemigrations, then python manage.py migrate --fake. We use the --fake flag because the tables already exist in the database and we do not want to create them again; without it you would be greeted with an "already exists" error. (The steps are combined into a single sketch below.)
If this does not work, you will need to tamper with additional tables in the database, such as django_migrations. This is not recommended, because other tables depend on it, such as django_content_type, and it will throw you into a chain of related tables that you have to inspect manually, which is very painful.
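For reference, steps 1-3 combined into a single shell sketch ("users" and "store" are placeholder app names; back up your database first):
sudo rm -rv */migrations/*          # step 1: wipe old migration files and caches
for app in users store; do          # step 2: recreate the migrations packages
    mkdir -p $app/migrations
    touch $app/migrations/__init__.py
done
python manage.py makemigrations     # step 3: regenerate migration files
python manage.py migrate --fake     # record them without re-creating existing tables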

Alembic migration acting on different versions of same database

I am trying to use alembic migrations to act on different versions of the same database. An example would be that I have two databases, one live and one for testing. Each of them might be in different states of migration. For one, the test database might not exist at all.
Say live has a table table1 with columns A and B. Now I would like to add column C. I change my model to include C and I generate a migration script that has the following code
op.add_column('table1', sa.Column('C', sa.String(), nullable=True))
This works fine with the existing live database.
If I now run alembic upgrade head against a non-existent test database, I get an (Operational Error) duplicate column name... error. I assume this is because my model contains the C column and Alembic/SQLAlchemy creates the full table automatically if it does not exist.
Should I simply trap the error or is there a better way of doing this?
I would suggest that immediately after your test db is newly created, you stamp it with head:
command.stamp(configs_for_test_db, 'head')
This will insert the head revision number into the appropriate Alembic table without actually running any migrations, so the revision number reflects the state of the db (namely, that your newly created db is up to date with respect to migrations). After the db is stamped, alembic upgrade should behave properly.
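A sketch of how that might look in test-setup code (the ini path and database URL are placeholders):
from alembic import command
from alembic.config import Config

configs_for_test_db = Config('alembic.ini')
configs_for_test_db.set_main_option('sqlalchemy.url', 'sqlite:///test.db')

# after creating the fresh schema (e.g. with metadata.create_all()),
# record the current head without running any migrations
command.stamp(configs_for_test_db, 'head')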

postgres : relation there even after dropping the database

I dropped the database that I had previously created for Django using:
dropdb <database>
but when I go to the psql prompt and type \d, I still see the relations there.
How do I remove everything from Postgres so that I can start from scratch?
Most likely, somewhere along the line you created your objects in the template1 database (or, in older versions, the postgres database), and every new database you create then has all those objects in it. You can either drop and recreate the template1/postgres database, or connect to it and drop all those objects by hand.
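To check whether that is what happened, connect to template1 and list its relations (the table name below is a placeholder):
$ psql template1
template1=# \d                            -- anything listed here is copied into every new database
template1=# DROP TABLE some_stray_table;  -- drop strays by hand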
Chances are that you never created the tables in the correct schema in the first place, or your dropdb failed to complete.
Try to drop the database again and see what it says. If that appears to work, then go into Postgres, type \l, and post the output here.
