So I've made some changes to my schema on a Flask server, using SQLite and SQLAlchemy. The database is generally functional: I'm able to add some of the models, as well as update and query all models without issues.
I've changed the id in two of my models from Integer to String in order to implement UUID ids, and since then I've been getting an IntegrityError for the mismatched parameter types when I do db.session.add(new_post) and db.session.commit().
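For reference, the change looks roughly like this (simplified; the model name and column length are just for illustration):

    import uuid
    from myapp import db  # the Flask-SQLAlchemy instance (name illustrative)

    class Post(db.Model):
        # previously: id = db.Column(db.Integer, primary_key=True)
        id = db.Column(db.String(36), primary_key=True,
                       default=lambda: str(uuid.uuid4()))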
If I do flask db migrate, it reports that no changes have been detected. Should I manually fill out a revision file or is there something else I am missing?
In addition to migrating your database, you'll need to upgrade:
flask db upgrade
If this results in an error, you might be running into database-specific constraints. For example, some primary keys cannot be removed without setting cascade rules on deletion for the relationships that reference them.
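As for manually filling out a revision file: if autogenerate keeps reporting no changes, a hand-written revision is a reasonable fallback. A minimal sketch, assuming the table is called post and the UUIDs are stored as 36-character strings (SQLite needs batch mode to change a column type):

    from alembic import op
    import sqlalchemy as sa

    # revision identifiers are filled in for you by `flask db revision`

    def upgrade():
        # SQLite can't ALTER a column's type in place, so use batch mode,
        # which recreates the table behind the scenes
        with op.batch_alter_table('post') as batch_op:
            batch_op.alter_column('id',
                                  existing_type=sa.Integer(),
                                  type_=sa.String(36))

    def downgrade():
        with op.batch_alter_table('post') as batch_op:
            batch_op.alter_column('id',
                                  existing_type=sa.String(36),
                                  type_=sa.Integer())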
I've tried many contortions on this problem to try to figure out what's going on.
My SQLAlchemy code specified tables as schema.table. I have a special connection object that connects using the specified connect string if the database is PostgreSQL or Oracle, but if the database is SQLite, it connects to a :memory: database, then attaches the SQLite file-based database using the schema name. This allows me to use schema names throughout my SQLAlchemy code without a problem.
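Roughly sketched, the connection helper does something like this (simplified; names and details differ in the real code):

    from sqlalchemy import create_engine, MetaData, event

    def connect(connect_string, schema, sqlite_path=None):
        if connect_string.startswith('sqlite'):
            # connect to an in-memory database, then attach the file-based
            # database under the schema name used throughout the model code
            engine = create_engine('sqlite://')

            @event.listens_for(engine, 'connect')
            def attach(dbapi_conn, conn_record):
                dbapi_conn.execute(
                    "ATTACH DATABASE '%s' AS %s" % (sqlite_path, schema))
        else:
            # PostgreSQL / Oracle: just use the connect string as-is
            engine = create_engine(connect_string)

        meta = MetaData()
        meta.reflect(bind=engine, schema=schema)
        return engine, meta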
But when I try to set up Alembic to see my database, it fails completely. What am I doing wrong?
I ran into several issues that had to be worked through before I got this working.
Initially, Alembic didn't see my database at all. If I tried to specify it in the alembic.ini file, it would load the SQLite database using the default schema, but my model code specified a schema, so that didn't work. I had to change alembic/env.py in run_migrations_online() to call my connection method from my code instead of using engine_from_config. In my case, I created a database object that had a connect() method that would return the engine and the metadata. I called that as connectable, meta = db.connect(). I would return the schema name with schema=db.schema(). I had to import the db class from my SQLAlchemy code to get access to these.
Now I was getting a migration that would build up the entire database from scratch, but I couldn't run that migration because my database already had those changes. So apparently Alembic wasn't seeing my database. Alembic also kept telling me that my database was out of date. The problem there was that the alembic_version table was being written to my :memory: database, and as soon as the connection was dropped, so was the database. So to get Alembic to remember the migration, I needed that table to be created in my database. I added more code to env.py to pass the schema to context.configure via version_table_schema=my_schema.
When I went to generate the migration again, I still got the migration that would build the database from scratch, so Alembic STILL wasn't seeing my database. After lots more Googling, I found that I needed to pass include_schemas=True to context.configure in env.py. But after I added that, I started getting tracebacks from Alembic.
Fortunately, my configuration was set up to provide both the connection and the metadata. By changing the target_metadata=target_metadata line to target_metadata=meta (my local metadata returned from the connection), I got around these tracebacks as well, and Alembic started to behave properly.
So to recap, to get Alembic working with a SQLite database attached as a schema name, I had to import the connection script I use for my Flask code. That connection script properly attaches the SQLite database, then reflects the metadata. It returns both the engine and the metadata. I assign the engine to the "connectable" variable in env.py, the metadata to a new local variable meta, and the schema name to a local variable schema.
In the with connectable.connect() as connection: block, I then pass the additional arguments target_metadata=meta, version_table_schema=schema, and include_schemas=True to context.configure, where meta and schema are the new local variables set above.
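Put together, the relevant part of my env.py looks roughly like this (a sketch; the import path for my db object is obviously specific to my project):

    from alembic import context
    from myproject.database import db  # my own connection script (path illustrative)

    def run_migrations_online():
        connectable, meta = db.connect()   # engine plus reflected metadata
        schema = db.schema()               # schema name the SQLite file is attached as

        with connectable.connect() as connection:
            context.configure(
                connection=connection,
                target_metadata=meta,          # metadata from my connection script
                version_table_schema=schema,   # keep alembic_version in my database
                include_schemas=True,          # compare schema-qualified tables
            )

            with context.begin_transaction():
                context.run_migrations()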
With all of these changes, I thought I was able to work with SQLite databases attached as schemas. Unfortunately, I continued to run into problems with this, and eventually decided that I simply wouldn't work with SQLite with Alembic. Our rule now is that Alembic migrations are only for non-SQLite databases, and SQLite data has to be migrated to another database before attempting an Alembic migration of the data.
I'm documenting this so that anyone else facing this may be able to follow what I've done and possibly get Alembic working for SQLite.
I'm using the peewee ORM to manage a few Postgres databases. I've recently had a problem where primary keys are not being automatically added when save() or execute() is called, as they should be.
Here's the code that's being called:
    Macro.insert(name=name, display_text=text).on_conflict(
        conflict_target=(Macro.name,),
        preserve=(Macro.display_text,),
        update={Macro.name: name},
    ).execute()
Here's the error:
Command raised an exception: IntegrityError: null value in column "id" violates non-null constraint;
DETAIL: Failing row contains (null, nametexthere, displaytexthere)
The Macro class has an id (AutoField, set as the primary key), a name (CharField), and a display_text (CharField). I've tried using the built-in PrimaryKeyField and an IntegerField set as primary key, with no change.
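For reference, the model looks roughly like this (trimmed down; the connection settings are placeholders):

    from peewee import Model, AutoField, CharField, PostgresqlDatabase

    db = PostgresqlDatabase('mydb')  # placeholder connection settings

    class Macro(Model):
        id = AutoField()               # primary key; should map to a sequence in Postgres
        name = CharField(unique=True)  # unique, since it's the conflict_target above
        display_text = CharField()

        class Meta:
            database = db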
Before, I was using Heroku with no issue. I've since migrated my apps to my Raspberry Pi and that's when this issue popped up.
This also isn't the only case where I've had this problem. I have another database with the same AutoField primary key that seems to have broken from the transition from Heroku to Pi. That one uses the save() method rather than insert()/execute(), but the failing row error still shows up.
I should also mention that other, non-insert queries work fine. I can still select without issue.
The problem didn't have anything to do with peewee; it had to do with the dump. Heroku does not dump sequences for you automatically, so I had to add them all again manually. Once those were added, the inserts worked fine.
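For anyone hitting the same thing, re-creating a missing sequence and re-attaching it as the column default looks roughly like this (a sketch; the table, column, and sequence names are assumptions, and the same statements can be run directly in psql):

    # db is the peewee PostgresqlDatabase instance
    db.execute_sql("CREATE SEQUENCE macro_id_seq OWNED BY macro.id;")
    db.execute_sql(
        "SELECT setval('macro_id_seq', "
        "COALESCE((SELECT MAX(id) FROM macro), 0) + 1, false);")
    db.execute_sql(
        "ALTER TABLE macro ALTER COLUMN id SET DEFAULT nextval('macro_id_seq');")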
I've tested with Python 2.6 and 2.7, with Django 1.5.1. My database is on MySQL 5.0. I've created the settings, but now I can't run "inspectdb" on the database. I get
DatabaseError: (1146, "Table 'db1.tableName' doesn't exist")
This happens on a table which has a foreign key referencing a table in another DB. So the error shouldn't be pointing at db1, since tableName lives in db2. I saw references to this bug from 5 years ago:
https://code.djangoproject.com/ticket/7556
But the patch is outdated by now, and I figured this must have been fixed in a later release. Is there something wrong with my setup?
Unfortunately, Django still does not support this feature.
Cross-database relations:
Django doesn’t currently provide any support for foreign key or many-to-many relationships spanning multiple databases. If you have used a router to partition models to different databases, any foreign key and many-to-many relationships defined by those models must be internal to a single database.
However, a fix can be found in the patch here.
Basically, it involves updating how the router resolves self.rel.to.
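For context, the kind of router-based partitioning the docs are describing looks roughly like this (a sketch against the Django 1.5-era router API; the app label and database aliases are placeholders):

    class LegacyRouter(object):
        """Send models from the 'legacy' app to the 'db2' alias."""

        def db_for_read(self, model, **hints):
            return 'db2' if model._meta.app_label == 'legacy' else 'default'

        def db_for_write(self, model, **hints):
            return 'db2' if model._meta.app_label == 'legacy' else 'default'

        def allow_relation(self, obj1, obj2, **hints):
            # cross-database FK/M2M relations aren't supported, so only allow
            # relations between objects stored in the same database
            return obj1._state.db == obj2._state.db

        def allow_syncdb(self, db, model):
            if db == 'db2':
                return model._meta.app_label == 'legacy'
            return model._meta.app_label != 'legacy'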
I'm using Sqlalchemy in a multitenant Flask application and need to create tables on the fly when a new tenant is added. I've been using Table.create to create individual tables within a new Postgres schema (along with search_path modifications) and this works quite well.
The limitation I've found is that the Table.create method blocks if there is anything pending in the current transaction. I have to commit the transaction right before the .create call or it will block. It doesn't appear to be blocked in Sqlalchemy because you can't Ctrl-C it. You have to kill the process. So, I'm assuming it's something further down in Postgres.
I've read in other answers that CREATE TABLE is transactional and can be rolled back, so I'm presuming this should be working. I've tried starting a new transaction with the current engine and using that for the table create (vs. the current Flask one) but that hasn't helped either.
Does anybody know how to get this to work without an early commit (and risking partial dangling data)?
This is Python 2.7, Postgres 9.1 and Sqlalchemy 0.8.0b2.
(Copy from comment)
Assuming sess is the session, you can do sess.execute(CreateTable(tenantX_tableY)) instead.
EDIT: CreateTable is only one of the things being done when calling table.create(). Use table.create(sess.connection()) instead.
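Putting both suggestions together (where sess is the active session and tenant_table stands in for the tenant-specific Table object):

    from sqlalchemy.schema import CreateTable

    # emit just the CREATE TABLE statement on the session's own connection,
    # so it participates in the current transaction
    sess.execute(CreateTable(tenant_table))

    # or run the full create sequence (constraints, sequences, etc.)
    # against that same connection
    tenant_table.create(sess.connection())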
In order to get Django to create InnoDB tables in MySQL, you either need to
output ALL tables as InnoDB, or
selectively issue ALTER TABLE commands.
The first is sub-optimal (MyISAM is faster for query-dominant tables) and the second is a pain and hack-ish.
Are there any other ways?
UPDATE: Adding more clarity to my question -
I want the models (or tables) that Django creates initially (using syncdb) to use the InnoDB engine (not MyISAM). In my database, I'll have some tables in InnoDB and some in MyISAM. How can I do this in Django?
This page should be a good starting point: http://code.djangoproject.com/wiki/AlterModelOnSyncDB
It documents a way to hook into the post_syncdb signal to dynamically issue ALTER TABLE SQL commands to change the storage engine for the tables. (Note that this was written 4 years ago and may need to be updated for the current version of Django.)
It should be straightforward for you to add metadata to your models that specifies which storage engine to use for each table. Then you can modify the above example to key off of that metadata.
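A rough sketch of that approach (assuming a custom storage_engine attribute on the models; the attribute name and app module are mine):

    from django.db import connection
    from django.db.models.signals import post_syncdb

    import myapp.models  # the app whose tables should be altered (illustrative)

    def set_storage_engine(sender, created_models, **kwargs):
        cursor = connection.cursor()
        for model in created_models:
            engine = getattr(model, 'storage_engine', None)  # e.g. 'InnoDB'
            if engine:
                cursor.execute(
                    "ALTER TABLE %s ENGINE = %s" % (model._meta.db_table, engine))

    post_syncdb.connect(set_storage_engine, sender=myapp.models)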