migrations are getting created repeatedly - python

I have created some models, and when I run the python manage.py db migrate command it creates a migration file, so that is fine.
The python manage.py db upgrade command also creates the tables in the database.
However, if I run python manage.py db migrate again, it creates another migration file for the models that I have already upgraded.
Can you please help me resolve this?

I had the same problem and resolved it.
In my case, the problem was in getting the current table names (when the get_table_names function is called in _autogen_for_tables in alembic/autogenerate/compare.py).
I am using SQLAlchemy with mysql-connector, and mysql-connector returns table information as bytearray.
So I temporarily changed the following in base.py (sqlalchemy/dialects/mysql), from:
@reflection.cache
def get_table_names(self, connection, schema=None, **kw):
    """Return a Unicode SHOW TABLES from a given schema."""
    if schema is not None:
        current_schema = schema
    else:
        current_schema = self.default_schema_name

    charset = self._connection_charset
    if self.server_version_info < (5, 0, 2):
        rp = connection.execute(
            "SHOW TABLES FROM %s"
            % self.identifier_preparer.quote_identifier(current_schema)
        )
        return [
            row[0] for row in self._compat_fetchall(rp, charset=charset)
        ]
    else:
        rp = connection.execute(
            "SHOW FULL TABLES FROM %s"
            % self.identifier_preparer.quote_identifier(current_schema)
        )
        return [
            row[0]
            for row in self._compat_fetchall(rp, charset=charset)
            if row[1] == "BASE TABLE"
        ]
to:
@reflection.cache
def get_table_names(self, connection, schema=None, **kw):
    """Return a Unicode SHOW TABLES from a given schema."""
    if schema is not None:
        current_schema = schema
    else:
        current_schema = self.default_schema_name

    charset = self._connection_charset
    if self.server_version_info < (5, 0, 2):
        rp = connection.execute(
            "SHOW TABLES FROM %s"
            % self.identifier_preparer.quote_identifier(current_schema)
        )
        return [
            row[0] for row in self._compat_fetchall(rp, charset=charset)
        ]
    else:
        rp = connection.execute(
            "SHOW FULL TABLES FROM %s"
            % self.identifier_preparer.quote_identifier(current_schema)
        )
        return [
            row[0].decode("utf-8")
            for row in self._compat_fetchall(rp, charset=charset)
            if row[1].decode("utf-8") == "BASE TABLE"
        ]
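As a less invasive variant of the same idea (my own sketch, not part of SQLAlchemy or the mysql-connector driver), you can normalize the values defensively so the comprehension works whether the driver returns str, bytes, or bytearray:

def to_text(value, encoding="utf-8"):
    """Return value as str, decoding bytes/bytearray results from the driver."""
    if isinstance(value, (bytes, bytearray)):
        return bytes(value).decode(encoding)
    return value

# Example with rows as a driver might return them (bytearray or str):
rows = [(bytearray(b"user"), bytearray(b"BASE TABLE")), ("alembic_version", "BASE TABLE")]
print([to_text(row[0]) for row in rows if to_text(row[1]) == "BASE TABLE"])
# -> ['user', 'alembic_version']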

I think the problem is in manage.py. If you did it as described on the Flask-Migrate site and stored all your models in that file, Flask-Migrate just picks up those models and generates migrations for them, and it will do so every time. Wrapping the standard command in your own file is the problem.
If you want to fix it, store the models in another directory (or another file), register them with the app, and use the flask db migrate command. In that case, Flask-Migrate generates a migration for the models only the first time; after that it detects changes and generates migrations only for those changes. A minimal layout is sketched after the quote below.
But be careful: Flask-Migrate doesn't see all changes. From the site:
The migration script needs to be reviewed and edited, as Alembic currently does not detect every change you make to your models. In particular, Alembic is currently unable to detect table name changes, column name changes, or anonymously named constraints. A detailed summary of limitations can be found in the Alembic autogenerate documentation.
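For reference, a minimal sketch of that layout (the file names, model, and database URI below are placeholders, not taken from the question): the models live in their own module, the app registers them, and the Flask CLI drives the migrations.

# models.py
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))

# app.py
from flask import Flask
from flask_migrate import Migrate
from models import db

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db.init_app(app)
migrate = Migrate(app, db)

# Then, from the shell:
#   flask db init      (run once)
#   flask db migrate
#   flask db upgrade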

Related

Error after Altering (Dropping columns and Change many-to-many membership) the SQLite in FLASK migrate

I had a User and a channel class in my model.py with a many-to-many relationship between them. I decided to remove the many-to-many relationship between User and channel and add a many-to-many relationship between a Usertemp class and the channel class.
After this, I issued:
flask db migrate
flask db upgrade
However, I received the error message that SQLite cannot perform this operation and that I should consider batch migration. I looked at similar topics such as https://github.com/miguelgrinberg/Flask-Migrate/issues/61#issuecomment-208131722 or Why Flask-migrate cannot upgrade when drop column.
I came across this code in both of the above links:
with app.app_context():
    if db.engine.url.drivername == 'sqlite':
        migrate.init_app(app, db, render_as_batch=True)
    else:
        migrate.init_app(app, db)
However, maybe there is more to do, since I still receive the SQLite error.
More info: after looking at the version files in my migrations folder, I saw None used as the name of some constraints.
This is the first version of my migration file, in which I use batch migration to drop the column; I guess the None is creating the problems:
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('enroll', sa.Column('usertemp_id', sa.Integer(), nullable=True))
    op.drop_constraint(None, 'enroll', type_='foreignkey')
    op.create_foreign_key(None, 'enroll', 'usertemp', ['usertemp_id'], ['id'])
    with op.batch_alter_table('enroll') as batch_op:
        batch_op.drop_column('user_id')
Then I added the following to my code:
convention = {
    "ix": 'ix_%(column_0_label)s',
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s"
}

metadata = MetaData(naming_convention=convention)
db = SQLAlchemy(app, metadata=metadata)
Then I ran flask db upgrade, but again I received an error.
The following is my updated migration version:
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_constraint(None, 'enroll', type_='foreignkey')
    op.create_foreign_key(op.f('fk_enroll_usertemp_id_usertemp'), 'enroll', 'usertemp', ['usertemp_id'], ['id'])
    op.drop_column('enroll', 'user_id')
    op.create_unique_constraint(op.f('uq_user_name'), 'user', ['name'])
I suspect the solution with render_as_batch=True can fix the issue, since some people have posted that it works for them. However, can anyone explain how to add it to my files and where exactly it should go?
I suspect it should be added to the env.py file, but then what else should I change afterwards?
Does adding the above code, including render_as_batch, fix the problem by itself?
I would appreciate any help, since right now any time I make a mistake I issue db.drop_all() and create the tables again, which is very inefficient.
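Not a definitive answer, but a minimal sketch of one common way to enable batch mode (the app setup below is a placeholder): pass render_as_batch=True straight to the Migrate() constructor. Flask-Migrate forwards the option to Alembic's context.configure(), so newly autogenerated migrations wrap their ALTERs in batch_alter_table blocks, which SQLite needs for operations like dropping a column.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db = SQLAlchemy(app)

# render_as_batch=True is passed through to Alembic when migrations are
# generated, so SQLite-incompatible ALTERs are emitted in batch mode.
migrate = Migrate(app, db, render_as_batch=True)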

Run alembic migrations one by one for multiple databases

In our project we have multiple databases and we use alembic for migration.
I know that alembic is supposed to be used only for database structure migration, but we also use it for data migration as it's convenient to have all database migration code in one place.
My problem is that alembic works on one database at a time. So if I have databases DB1 and DB2, alembic will first run all migrations for DB1 and after that all migrations for DB2.
The problems start when we migrate data between databases. Say, if in revision N of DB1 I try to access data in DB2, the migration can fail because DB2 may still be on revision zero or N-X.
Question: is it possible to run alembic migrations one by one for all databases instead of running all migrations for DB1 and then running all for DB2?
My current env.py migration function:
def run_migrations_online():
    """
    for the direct-to-DB use case, start a transaction on all
    engines, then run all migrations, then commit all transactions.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        cfg = context.config.get_section(name)
        if not 'sqlalchemy.url' in cfg:
            cfg['sqlalchemy.url'] = build_url(name)
        rec['engine'] = engine_from_config(
            cfg,
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()
        rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            logger.info("Migrating database %s" % name)
            context.configure(
                connection=rec['connection'],
                upgrade_token="%s_upgrades" % name,
                downgrade_token="%s_downgrades" % name,
                target_metadata=target_metadata.get(name))
            context.run_migrations(engine_name=name)

        for rec in engines.values():
            rec['transaction'].commit()
    except:
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()
While I haven't tested this myself, I have been reading https://alembic.sqlalchemy.org/en/latest/api/script.html
It seems feasible that you could use ScriptDirectory to iterate through all the revisions, check whether each database still needs a given revision, and then, rather than calling context.run_migrations, manually call command.upgrade(config, revision) to apply that one revision. A rough sketch of the idea follows.
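A rough, untested sketch of that idea, assuming one alembic.ini section per database (the section names db1 and db2 are placeholders) and glossing over the per-engine upgrade tokens in env.py:

from alembic import command
from alembic.config import Config
from alembic.script import ScriptDirectory

# Hypothetical: one Config per database, each reading its own ini section
# but sharing the same script/versions directory.
configs = {name: Config("alembic.ini", ini_section=name) for name in ("db1", "db2")}

# Walk the shared revision history once, oldest revision first.
script = ScriptDirectory.from_config(next(iter(configs.values())))
revisions = list(script.walk_revisions("base", "heads"))
revisions.reverse()

# Apply each revision to every database before moving on to the next,
# so the databases advance in lockstep rather than one after the other.
for rev in revisions:
    for name, cfg in configs.items():
        command.upgrade(cfg, rev.revision)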

Django raw raises relation does not exist

I am getting a "relation does not exist" error and I can't find a solution.
error: relation "sales_Oeslshstsql" does not exist
LINE 1: SELECT * FROM "sales_Oeslshstsql
(the app name is sales)
model:
class Oeslshstsql(models.Model):
    hst_prd = models.SmallIntegerField()
    hst_year = models.SmallIntegerField()
    cus_no = models.CharField(max_length=12)
    item_no = models.CharField(max_length=15)
    .....
    a4glidentity = models.IntegerField(db_column='A4GLIdentity', primary_key=True)

    class Meta:
        managed = False
        db_table = 'OESLSHST_SQL'

    def __str__(self):
        return (self.hst_year)
View:
def sales(request):
    #sales_list = Oeslshstsql.objects.all().order_by('hst_year','hst_prd').reverse()
    s = Oeslshstsql.objects.raw('SELECT * FROM "sales_Oeslshstsql"')
    sales_list = s
    return render(request, 'saleslist.html', {'sales_list': sales_list})
The error is raised when s is evaluated. I tried switching the case in the SELECT and messed with the migrations, with no luck.
I am migrating an existing app to Django using a Postgres backend; any help would be appreciated.
Try:
s = Oeslshstsql.objects.raw('SELECT a4glidentity AS id, ... FROM "OESLSHST_SQL"')
See http://docs.djangoproject.com/en/1.8/ref/models/options/#db-table; it seems the table name in your query is wrong, since Meta.db_table points at OESLSHST_SQL rather than sales_Oeslshstsql.
Edit: you should also map the primary key as id; see https://docs.djangoproject.com/en/1.8/topics/db/sql/#mapping-query-fields-to-model-fields
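Putting both suggestions together, the view might look like the sketch below; it simply queries the table named in Meta.db_table and keeps a plain SELECT * so all model columns (including the primary key) come back under their own names:

def sales(request):
    # Query the legacy table from Meta.db_table, not the default
    # "<app>_<model>" name that Django would otherwise generate.
    sales_list = Oeslshstsql.objects.raw('SELECT * FROM "OESLSHST_SQL"')
    return render(request, 'saleslist.html', {'sales_list': sales_list})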
Hi, I had the same issue migrating an existing app to 1.11. The only solution I found was:
1) Clear all files from the app's migrations dir, leaving only the __init__.py file
2) Make sure that the admin.py file is empty
3) Run manage.py makemigrations
4) Run manage.py sqlmigrate <app_label> 0001
5) Copy the SQL output
6) Using pgAdminIII, select "Execute arbitrary SQL queries"
7) Paste and execute the SQL statements in pgAdminIII
This was the only solution I could find; a bit of a hack, true, but it worked. Hope it helps. This would also work via the psql terminal, I suppose, but I used pgAdmin.

Why Doesn't Django Reset Sequences in SQLite3?

Why does Django allow you to reset the sequences (auto ID fields) on Postgres and other DBMSs, but not on SQLite3?
Looking at the source code for the sql_flush method in django/db/backends/sqlite3/base.py, there is a comment that says:
Note: No requirement for reset of auto-incremented indices (cf. other sql_flush() implementations). Just return SQL at this point
I have a few tests where I load in fixture files that depend on absolute primary key ids. Because Django doesn't reset the auto id field for SQLite, these fixtures do not load correctly.
It appears that it is somewhat trivial to reset the auto id columns in sqlite: How can I reset a autoincrement sequence number in sqlite
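For reference, the "somewhat trivial" SQLite-level reset amounts to deleting a table's row from the internal sqlite_sequence table; a minimal sketch with the standard library (the database path and table name are placeholders):

import sqlite3

conn = sqlite3.connect("db.sqlite3")
# AUTOINCREMENT counters live in the internal sqlite_sequence table;
# removing a table's row makes its next rowid start from 1 again.
conn.execute("DELETE FROM sqlite_sequence WHERE name = ?", ("myapp_reporter",))
conn.commit()
conn.close()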
You can monkey-patch sql_flush as follows to reset SQLite sequences:
from django.db.backends.sqlite3.operations import DatabaseOperations
from django.db import connection


def _monkey_patch_sqlite_sql_flush_with_sequence_reset():
    original_sql_flush = DatabaseOperations.sql_flush

    def sql_flush_with_sequence_reset(self, style, tables, sequences, allow_cascade=False):
        sql_statement_list = original_sql_flush(self, style, tables, sequences, allow_cascade)
        if tables:
            # DELETE FROM sqlite_sequence WHERE name IN ($tables)
            sql = '%s %s %s %s %s %s (%s);' % (
                style.SQL_KEYWORD('DELETE'),
                style.SQL_KEYWORD('FROM'),
                style.SQL_TABLE(self.quote_name('sqlite_sequence')),
                style.SQL_KEYWORD('WHERE'),
                style.SQL_FIELD(self.quote_name('name')),
                style.SQL_KEYWORD('IN'),
                ', '.join(style.SQL_FIELD(f"'{table}'") for table in tables)
            )
            sql_statement_list.append(sql)
        return sql_statement_list

    DatabaseOperations.sql_flush = sql_flush_with_sequence_reset
You would use it as follows for example in a TransactionTestCase:
from django.test import TransactionTestCase


class TransactionTestCaseWithSQLiteSequenceReset(TransactionTestCase):
    reset_sequences = True

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        if connection.vendor == 'sqlite':
            _monkey_patch_sqlite_sql_flush_with_sequence_reset()

This ensures that tests that depend on fixed primary keys work with both SQLite and other database backends like PostgreSQL. However, see the Django documentation for caveats regarding reset_sequences; for one thing, it makes tests slow.
Maybe this snippet will help:
import os
from django.core.management import call_command
from django.db import connection
from django.utils.six import StringIO


def reset_sequences(app_name):
    os.environ['DJANGO_COLORS'] = 'nocolor'
    buf = StringIO()
    call_command('sqlsequencereset', app_name, stdout=buf)
    buf.seek(0)
    sql = "".join(buf.readlines())
    with connection.cursor() as cursor:
        cursor.execute(sql)
    print("Sequences for app '{}' reset".format(app_name))
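A possible usage sketch (the app label is a placeholder; note that django.utils.six was removed in Django 3.0, so on newer versions import StringIO from the standard library io module instead):

# After deleting rows or loading fixtures for the app:
reset_sequences('myapp')
# The app's sequences are realigned with the current table contents.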

How to remove all of the data in a table using Django

I have two questions:
How do I delete a table in Django?
How do I remove all the data in the table?
This is my code, which is not successful:
Reporter.objects.delete()
Inside a manager:
def delete_everything(self):
    Reporter.objects.all().delete()

def drop_table(self):
    cursor = connection.cursor()
    table_name = self.model._meta.db_table
    sql = "DROP TABLE %s;" % (table_name, )
    cursor.execute(sql)
As per the latest documentation, the correct method to call would be:
Reporter.objects.all().delete()
If you want to remove all the data from all your tables, you might want to try the command python manage.py flush. This will delete all of the data in your tables, but the tables themselves will still exist.
See more here: https://docs.djangoproject.com/en/1.8/ref/django-admin/
Using the shell:
1) For deleting the table:
python manage.py dbshell
>> DROP TABLE {app_name}_{model_name}
2) For removing all data from the table:
python manage.py shell
>> from {app_name}.models import {model_name}
>> {model_name}.objects.all().delete()
In Django 1.11, to delete all objects from a database table:
Entry.objects.all().delete()  ## Entry being the model name
Refer to the official Django documentation, quoted below:
https://docs.djangoproject.com/en/1.11/topics/db/queries/#deleting-objects
Note that delete() is the only QuerySet method that is not exposed on a Manager itself. This is a safety mechanism to prevent you from accidentally requesting Entry.objects.delete(), and deleting all the entries. If you do want to delete all the objects, then you have to explicitly request a complete query set:
I myself tried the code snippet below within my somefilename.py:
# for deleting model objects
from django.db import connection

def del_model_4(self):
    with connection.schema_editor() as schema_editor:
        schema_editor.delete_model(model_4)
and within my views.py I have a view that simply renders an HTML page ...
def data_del_4(request):
    obj = calc_2()  ##
    obj.del_model_4()
    return render(request, 'dc_dash/data_del_4.html')  ##
It ended up deleting all entries from model_4, but now I get an error screen in the admin console when I try to ascertain that all objects of model_4 have been deleted:
ProgrammingError at /admin/dc_dash/model_4/
relation "dc_dash_model_4" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "dc_dash_model_4"
Do consider that if we do not go to the admin console and try to view objects of the model that has already been deleted, the Django app works just as intended.
(django admin screen capture)
Use this syntax to delete the rows and also redirect to the homepage (to avoid page load errors):
def delete_all(self):
    Reporter.objects.all().delete()
    return HttpResponseRedirect('/')
You can use the Django-Truncate library to delete all data of a table without destroying the table structure.
Example:
First, install django-truncate using your terminal/command line:
pip install django-truncate
Add "django_truncate" to your INSTALLED_APPS in the settings.py file:
INSTALLED_APPS = [
    ...
    'django_truncate',
]
Use this command in your terminal to delete all data of the table from the app:
python manage.py truncate --apps app_name --models table_name
There are a couple of ways:
To delete it directly:
SomeModel.objects.filter(id=id).delete()
To delete it via an instance:
instance1 = SomeModel.objects.get(id=id)
instance1.delete()  # don't reuse the same name
Actually, I un-register the model (the table whose data I want to delete) from admin.py. Then I migrate:
python manage.py makemigrations
python manage.py migrate
python manage.py runserver
Then I register the model in admin.py again and run the migrations once more. :) Now the table is empty. This might not be a professional answer, but it helped me.
