QUESTION:
How can I exclude the logs migrations from the default database when using multiple databases in Django?
I want this to be automated, so I started overriding the migrate command.
I am using the default database for all models in my application, and I need a new database, logs, for only one model (the model lives in a different app, logs).
I have successfully connected the application to both databases. I am also using a router to control the operations:
class LogRouter:
    route_app_labels = {'logs'}

    def db_for_read(self, model, **hints):
        ...

    def db_for_write(self, model, **hints):
        ...

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        """
        Make sure the logs app only appears in the 'logs' database.
        """
        if app_label in self.route_app_labels:
            return db == 'logs'
        if db != 'default':
            # If the database is not 'default', do not apply
            # any other app's migrations to it.
            return False
        return None
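For reference, the wiring this setup assumes looks roughly like the following settings.py sketch; the engine and database names are placeholders, and only the aliases default and logs come from the question:
# Hypothetical settings.py wiring for the two databases described above.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'app_db',   # placeholder name
    },
    'logs': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'logs_db',  # placeholder name
    },
}

DATABASE_ROUTERS = ['myproject.routers.LogRouter']  # hypothetical path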
With allow_migrate I am faking the logs migrations in the default database, which still updates its django_migrations table with the logs migration records.
Likewise, with
if db != 'default':
    # If the database is not 'default', do not apply
    # any other app's migrations to it.
    return False
I am faking the default database's migrations in the logs database, and again its django_migrations table is updated with all of the default database's migrations.
This is a workable solution, but I want to achieve the following:
- The logs migrations should be ignored in the default database, including its django_migrations table
- The default database's migrations should be ignored in the logs database, including its django_migrations table
To achieve this, I tried overriding the migrate command:
from django.core.management.commands import migrate

class Command(migrate.Command):
    def handle(self, *args, **options):
        super(Command, self).handle(*args, **options)
        # Equivalent to: python manage.py migrate logs --database=logs
        # This executes only the logs migrations, in the logs database.
        options['app_label'] = options['database'] = 'logs'
        super(Command, self).handle(*args, **options)
This code fixes the logs database, but the first handle() call still fakes the logs migrations on the default database, so they are still written to its django_migrations table.
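One way to see exactly what each database has recorded is to read django_migrations through Django's MigrationRecorder; a small diagnostic sketch, not part of the original post:
from django.db import connections
from django.db.migrations.recorder import MigrationRecorder

# Print the (app, name) pairs each connection considers applied.
for alias in ('default', 'logs'):
    recorder = MigrationRecorder(connections[alias])
    for app_label, name in sorted(recorder.applied_migrations()):
        print(alias, app_label, name)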
Related
I have multiple schemas in my database, and several models per schema. Flask-Migrate (which uses Alembic) is unable to detect changes in any schema besides the public schema. Running
flask db migrate
followed by
flask db upgrade
yields an error every time because the tables already exist. How can I configure Alembic to recognize schemas other than the public schema?
Modify your env.py file created by Alembic so that the context.configure function is called using the include_schemas=True option. Ensure that this is done in both your offline and online functions.
Here are my modified run_migrations_offline and run_migrations_online functions.
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=get_metadata(), literal_binds=True, include_schemas=True
    )

    with context.begin_transaction():
        context.run_migrations()
def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """

    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
    def process_revision_directives(context, revision, directives):
        if getattr(config.cmd_opts, 'autogenerate', False):
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                logger.info('No changes in schema detected.')

    connectable = get_engine()

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=get_metadata(),
            process_revision_directives=process_revision_directives,
            **current_app.extensions['migrate'].configure_args,
            include_schemas=True
        )

        with context.begin_transaction():
            context.run_migrations()
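One related knob worth knowing: with include_schemas=True, Alembic will compare every schema it can see. If that is too broad, the documented include_object callback can narrow autogenerate down; a sketch, where 'tenant' is a placeholder schema name:
def include_object(object, name, type_, reflected, compare_to):
    # Only consider tables in the schemas we actually manage.
    if type_ == "table":
        return object.schema in (None, "public", "tenant")
    return True
Passing include_object=include_object alongside include_schemas=True in context.configure applies the filter.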
I have a Flask application and am trying to make it multi-tenant using multiple schemas in a single database.
When an alteration to the database is needed, like adding a column or adding a table, I need to migrate each of the schemas. I changed my migrations/env.py like below:
def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    engine = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    # schemas = set([prototype_schema, None])
    connection = engine.connect()
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True,  # schemas,
        # include_object=include_schemas([None, prototype_schema])
        include_object=include_schemas([None])
    )

    try:
        domains = ['public', 'test', 'some_schema_name']
        for domain in domains:
            connection.execute('set search_path to "{}", public'.format(domain))
            with context.begin_transaction():
                context.run_migrations()
    finally:
        connection.close()
The migrations only affect the first schema in the list: here, only public gets migrated. I need the migrations to run across all schemas.
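I haven't verified this, but a known wrinkle with this pattern is that SQLAlchemy caches the connection's default schema name, so changing search_path alone may not be picked up inside the loop. A hedged sketch of the loop with that cache reset (the default_schema_name assignment is the assumption here):
for domain in domains:
    connection.execute('set search_path to "{}", public'.format(domain))
    # Assumption: refresh the dialect's cached schema so Alembic's
    # version table and reflection target the current tenant.
    connection.dialect.default_schema_name = domain
    with context.begin_transaction():
        context.run_migrations()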
I have two databases defined: default, which is a regular MySQL backend, and redshift (using a postgres backend). I would like to use Redshift as a read-only database that is just used for django-sql-explorer.
Here is the router I have created in my_project/common/routers.py:
class CustomRouter(object):
    def db_for_read(self, model, **hints):
        return 'default'

    def db_for_write(self, model, **hints):
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        db_list = ('default', )
        if obj1._state.db in db_list and obj2._state.db in db_list:
            return True
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        return db == 'default'
And my settings.py references it like so:
DATABASE_ROUTERS = ['my_project.common.routers.CustomRouter', ]
The problem occurs when invoking makemigrations: Django throws an error indicating that it is trying to create django_* tables in Redshift (and obviously failing, because the postgres type serial is not supported by Redshift):
...
raise MigrationSchemaMissing("Unable to create the django_migrations table (%s)" % exc)
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (Column "django_migrations.id" has unsupported type "serial".)
So my question is two-fold:
- Is it possible to completely disable Django management for a database, but still use the ORM?
- Barring read-only replicas, why has Django not considered read-only databases an acceptable use case to support?
Related Questions
- Column 'django_migrations.id' has unsupported type 'serial' [with Amazon Redshift]
I just discovered that this is the result of a bug. It's been addressed in a few PRs, most notably: https://github.com/django/django/pull/7194
So, to answer my own questions:
- No, it's not currently possible. The best workaround is to use a custom database router in combination with a read-only DB account and have allow_migrate() return False in the router.
- The best solution is to upgrade to Django >= 1.10.4 and not use a custom database router, which avoids the bug. There is a caveat, however, if you have any other databases defined, such as a read replica.
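A minimal sketch of the router arrangement from the first point, with the alias name taken from the question (pair it with a read-only database account):
class NoMigrateRouter(object):
    """Keep every app's migrations off the 'redshift' alias."""

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        return db != 'redshift'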
In our project we have multiple databases and we use alembic for migration.
I know that alembic is supposed to be used only for database structure migration, but we also use it for data migration as it's convenient to have all database migration code in one place.
My problem is that alembic works on one database at a time. So if I have databases DB1 and DB2, alembic will first run all migrations for DB1 and after that all migrations for DB2.
The problems start when we migrate data between databases. Say, if a migration at revision N of DB1 tries to access data in DB2, it can fail because DB2 may still be at revision zero or N-X.
Question: is it possible to run alembic migrations one by one for all databases instead of running all migrations for DB1 and then running all for DB2?
My current env.py migration function:
def run_migrations_online():
    """
    For the direct-to-DB use case, start a transaction on all
    engines, then run all migrations, then commit all transactions.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        cfg = context.config.get_section(name)
        if 'sqlalchemy.url' not in cfg:
            cfg['sqlalchemy.url'] = build_url(name)
        rec['engine'] = engine_from_config(
            cfg,
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()
        rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            logger.info("Migrating database %s" % name)
            context.configure(
                connection=rec['connection'],
                upgrade_token="%s_upgrades" % name,
                downgrade_token="%s_downgrades" % name,
                target_metadata=target_metadata.get(name))
            context.run_migrations(engine_name=name)

        for rec in engines.values():
            rec['transaction'].commit()
    except:
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()
While I haven't tested this myself, I have been reading https://alembic.sqlalchemy.org/en/latest/api/script.html
It seems feasible that you could use ScriptDirectory to iterate through all the revisions, check whether each database needs a given revision, and then, rather than calling context.run_migrations, manually call command.upgrade(config, revision) to apply one revision at a time.
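A rough sketch of that idea, assuming a single Alembic config whose env.py (as above) migrates every database on each run, so stepping one revision at a time keeps the databases in lockstep:
from alembic import command
from alembic.config import Config
from alembic.script import ScriptDirectory

config = Config('alembic.ini')  # path is illustrative
script = ScriptDirectory.from_config(config)

# walk_revisions() yields revisions newest-first; reverse for
# oldest-first order, then upgrade all databases one step at a time.
for sc in reversed(list(script.walk_revisions())):
    command.upgrade(config, sc.revision)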
I have two django apps, call them Main on server A and Tasker on server B.
Main responds to user requests and does a lot of things that can be quickly done.
On the other hand, Tasker only has a few models for logging and celery tasks.
On server A, 'tasker' is not included in INSTALLED_APPS as I don't need it there, whereas on server B, it is.
Following Django's documentation, I created a router and defined db_for_read and db_for_write:
class ModelsRouter(object):
    """
    Logging models are local,
    but updated models are on another server.
    """
    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'tasker':
            return 'tasker'
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == 'tasker':
            return 'tasker'
        return None
On server B, the DATABASES setting contains two keys:
- default, pointing to server A
- tasker, pointing to localhost
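In settings.py terms, that configuration would look roughly like this sketch (engine, names, and host are placeholders; only the aliases come from the question):
# Hypothetical DATABASES for server B.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'main_db',               # placeholder
        'HOST': 'server-a.example.com',  # hypothetical address of server A
    },
    'tasker': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'tasker_db',             # placeholder
        'HOST': 'localhost',
    },
}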
The problem I have is that when I run manage.py migrate, the models of tasker are created on server A.
How can I make the project on server B aware of the following:
- models of the main app are on server A
- models of tasker are on server B (also localhost)?
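Since the symptom shows up under migrate, the missing piece is most likely an allow_migrate method on the router; a sketch of what ModelsRouter could gain, using the question's alias names (this method is not in the original post):
def allow_migrate(self, db, app_label, model_name=None, **hints):
    # tasker's tables belong on the 'tasker' alias; keep everything
    # else on 'default'.
    if app_label == 'tasker':
        return db == 'tasker'
    return db == 'default'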
I managed to solve the problem the following way:
- I modified ModelsRouter to use the main database configuration if models are NOT from the tasker app
- on the server where I deployed tasker, I modified DATABASES so that default points to localhost and main points to the other server where main resides
- on server B, I ran manage.py migrate tasker, as the other models are not needed in that database
It's working now:
- logging is done in tables on server B
- updates are performed on the other server
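A sketch of the modified ModelsRouter described above, with 'main' as the remote server's alias and 'default' now pointing to localhost:
class ModelsRouter(object):
    """Route non-tasker models to the remote 'main' database."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label != 'tasker':
            return 'main'
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label != 'tasker':
            return 'main'
        return None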
The problem I ran into when running manage.py migrate tasker was this:
RuntimeError: Error creating new content types. Please make sure contenttypes is migrated before trying to migrate apps individually.
but I'll manage it.