I have multiple schemas in my database, and several models per schema. Flask-Migrate (which wraps Alembic) is unable to detect changes in any schema besides the public schema. Running
flask db migrate
followed by
flask db upgrade
will yield an error every time because the tables are already created. How can I configure Alembic to recognize schemas other than public?
Modify the env.py file generated by Alembic so that context.configure is called with the include_schemas=True option. Make sure this is done in both your offline and online functions.
Here are my modified run_migrations_offline and run_migrations_online functions.
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=get_metadata(),
        literal_binds=True,
        include_schemas=True,
    )

    with context.begin_transaction():
        context.run_migrations()
def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """

    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
    def process_revision_directives(context, revision, directives):
        if getattr(config.cmd_opts, 'autogenerate', False):
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                logger.info('No changes in schema detected.')

    connectable = get_engine()

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=get_metadata(),
            process_revision_directives=process_revision_directives,
            **current_app.extensions['migrate'].configure_args,
            include_schemas=True,
        )

        with context.begin_transaction():
            context.run_migrations()
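If autogenerate then starts picking up tables in schemas you don't manage (extensions, other applications), you can pair include_schemas=True with an include_object callback to filter what gets compared. A minimal sketch, assuming a hypothetical MANAGED_SCHEMAS whitelist; the callback signature is Alembic's documented one:

MANAGED_SCHEMAS = {None, 'public', 'tenant_a'}  # hypothetical whitelist

def include_object(obj, name, type_, reflected, compare_to):
    # Only compare tables whose schema we actually own; everything else
    # (columns, indexes, etc.) falls through to the default behavior.
    if type_ == 'table':
        return obj.schema in MANAGED_SCHEMAS
    return True

Pass include_object=include_object alongside include_schemas=True in context.configure.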
I have a FastAPI Python application with routes that operate on a MongoDB instance.
The connection works fine, and I can query documents for my GET endpoints, but creating a new document from within FastAPI seems impossible.
I consistently get:
You have not defined a default connection
I have a standalone script that handles some data migration tasks, and it uses the exact same DB class and Document models that the FastAPI app does; that script is able to save documents to Mongo perfectly fine. There is no difference in how the DB object is instantiated between the API and the script.
The DB class:
from os import getenv

from mongoengine import connect
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError


class Mongo:
    @property
    def target_db(self):
        return 'some_db'

    @property
    def uri(self) -> str:
        env_uri = getenv('MONGODB', None)
        if env_uri is None:
            # DBError is a custom exception defined elsewhere in the project
            raise DBError('MONGODB environment variable missing')
        return env_uri.strip()

    def connect(self) -> MongoClient:
        try:
            return connect(host=self.uri, db=self.target_db, alias=self.target_db)
        except ServerSelectionTimeoutError as e:
            raise ServerSelectionTimeoutError(e)
All of my DB models have meta attributes defining exactly what DB and collection to use:
class Thing(Document):
    meta = {'db_alias': 'some_db',
            'collection': 'things'}
Queries on existing documents succeed inside of a route definition:
results = Thing.objects.filter(**query)
# This returns things that I can iterate over
Document creation fails inside of a route definition:
new_thing = Thing(**creation_args)
new_thing.save()
Error:
mongoengine.connection.ConnectionFailure: You have not defined a default connection
What does that even mean? I know that I'm connected because I can query the db.
How is it possible that I can successfully query documents from Mongo but not save them?
Every suggestion I have seen online points to not having defined a db or alias in the call to mongoengine.connect, but I clearly do in my Mongo object, and even if that were the problem, surely I wouldn't be able to retrieve documents from the collection...
The mongoengine Document models had malformed meta attributes...
class Accounts(Document):
    meta = {'db_alias': 'some_db',
            'collection': 'things'}
Solution: db_alias needed to be changed to db.
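Per that change, the working model looks like this:

class Accounts(Document):
    meta = {'db': 'some_db',        # was 'db_alias'
            'collection': 'things'}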
I didn't find this through the documentation, and definitely not through the extremely unhelpful error messages; I just tried it on a whim. Now everything works under the FastAPI framework.
I have a Flask application and am trying to make it multi-tenant using multiple schemas in a single database.
Whenever the database needs an alteration, such as adding a column or a table, I need to apply the migration to each schema. I changed my migrations/env.py as below:
def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    engine = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    # schemas = set([prototype_schema, None])
    connection = engine.connect()
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True,  # schemas
        # include_object=include_schemas([None, prototype_schema])
        include_object=include_schemas([None])
    )
    try:
        domains = ['public', 'test', 'some_schema_name']
        for domain in domains:
            connection.execute('set search_path to "{}", public'.format(domain))
            with context.begin_transaction():
                context.run_migrations()
    finally:
        connection.close()
The migrations only affect the first schema in the array; here, only public gets migrated. I need the migration to run across all schemas.
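The thread doesn't include an accepted fix, but one approach consistent with Alembic's documented options is to reconfigure the context once per schema and give each schema its own version table via version_table_schema, so the version stamp recorded for the first schema doesn't short-circuit the rest. A rough sketch, reusing the domains list above:

try:
    domains = ['public', 'test', 'some_schema_name']
    for domain in domains:
        connection.execute('set search_path to "{}", public'.format(domain))
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_schemas=True,
            version_table_schema=domain,  # separate alembic_version per schema
        )
        with context.begin_transaction():
            context.run_migrations()
finally:
    connection.close()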
Problem
I'm using Alembic autogenerate to migrate some model changes. I run alembic revision/upgrade once and it properly creates my table and adds an alembic_version table to my database. When I go to run the revision/upgrade command again, it tries to recreate the table despite no changes having been made to the model:
alembic.command.revision(cfg, autogenerate=True)
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'alias.alias'
As you can see here, it's attempting to add the table alias.alias even though it already exists in my database and was created by Alembic in the first revision/upgrade command.
Predictably, when I attempt to run the second upgrade I get the error
psycopg2.errors.DuplicateTable: relation "alias" already exists
Current setup
env.py
import sys
import os
sys.path.insert(0, '/tmp/')
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
from models.Base import Base
from models import Alias
config = context.config
fileConfig(config.config_file_name)
target_metadata = Base.metadata
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Alias.py
from sqlalchemy import Column, Integer, String
from models.Base import Base
class Alias(Base):
    __tablename__ = 'alias'
    __table_args__ = {'schema': 'alias'}

    id = Column(Integer, primary_key=True)
    chart_config = Column(String)
    name = Column(String, nullable=False, unique=True)
    display_name = Column(String)
    datasets = Column(String)
Base.py
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
Expected outcome
How do I get Alembic to detect that the alias.alias table already exists? It should autogenerate an empty revision, since the model in Alias.py is completely static across my two runs of revision/upgrade.
Solved by editing Alembic's env.py to include schemas:
with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True  # Include this
    )

    with context.begin_transaction():
        context.run_migrations()
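For context on why this works: without include_schemas=True, autogenerate only compares tables in the default schema (the one on the connection's search path), so a table declared with __table_args__ = {'schema': 'alias'} is never found during reflection and keeps getting reported as an added table. With the option set, a second autogenerate run against an unchanged model should produce the expected empty revision.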
I'd like to connect to a second external database during my migration to move some of its data into my local database. What's the best way to do this?
Once the second DB has been added to the Alembic context (which I'm not sure how to do), how can I run SQL statements against it during my migration?
This is what my env.py looks like right now:
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
from migration_settings import database_url
import models
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
target_metadata = models.Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = database_url or config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        version_table_schema='my_schema',
    )

    with context.begin_transaction():
        context.run_migrations()
def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    config_overrides = {'url': database_url} if database_url is not None else {}
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool,
        **config_overrides)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            version_table_schema='my_schema'
        )
        connection.execute('CREATE SCHEMA IF NOT EXISTS my_schema')

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
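The thread doesn't carry an answer, but mechanically this is usually done inside the migration script itself rather than in env.py: create a second engine pointing at the external database, read from it, and write to the local database through op.get_bind(), which is the connection Alembic is already migrating with. A hedged sketch; EXTERNAL_DB_URL and the two table names are hypothetical stand-ins:

# in a migration script under versions/
import sqlalchemy as sa
from alembic import op

EXTERNAL_DB_URL = 'postgresql://user:pass@other-host/otherdb'  # hypothetical

def upgrade():
    external = sa.create_engine(EXTERNAL_DB_URL)
    local = op.get_bind()  # the connection Alembic is migrating with
    with external.connect() as src:
        rows = src.execute(sa.text('SELECT id, name FROM legacy_things')).fetchall()
    for row in rows:
        local.execute(
            sa.text('INSERT INTO things (id, name) VALUES (:id, :name)'),
            {'id': row.id, 'name': row.name},
        )
    external.dispose()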
In our project we have multiple databases, and we use Alembic for migration.
I know that Alembic is supposed to be used only for database structure migration, but we also use it for data migration, as it's convenient to have all database migration code in one place.
My problem is that Alembic works on one database at a time. So if I have databases DB1 and DB2, Alembic will first run all migrations for DB1 and after that all migrations for DB2.
The problems start when we migrate data between databases. Say, if a migration at revision N of DB1 tries to access data in DB2, it can fail because DB2 may still be at revision zero or N-X.
Question: is it possible to run Alembic migrations one by one across all databases, instead of running all migrations for DB1 and then all for DB2?
My current env.py migration function:
def run_migrations_online():
    """
    For the direct-to-DB use case, start a transaction on all
    engines, then run all migrations, then commit all transactions.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        cfg = context.config.get_section(name)
        if 'sqlalchemy.url' not in cfg:
            cfg['sqlalchemy.url'] = build_url(name)
        rec['engine'] = engine_from_config(
            cfg,
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()
        rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            logger.info("Migrating database %s" % name)
            context.configure(
                connection=rec['connection'],
                upgrade_token="%s_upgrades" % name,
                downgrade_token="%s_downgrades" % name,
                target_metadata=target_metadata.get(name))
            context.run_migrations(engine_name=name)

        for rec in engines.values():
            rec['transaction'].commit()
    except:
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()
While I haven't tested this myself, I have been reading https://alembic.sqlalchemy.org/en/latest/api/script.html
It seems feasible that you could use ScriptDirectory to iterate through all the revisions, check whether each DB still needs a given revision, and then, rather than calling context.run_migrations, manually call command.upgrade(config, revision) to apply that one revision, as in the sketch below.