I'm looking to write tests for my application. I would like to work with a clean database for all my tests. For various reasons, I cannot create a separate test database.
What I currently do is run everything in a transaction and never commit to the db. However, some tests read from the db, so I'd like to delete all rows at the start of the transaction and start from there.
The problem I am running into is with foreign key constraints. Currently I just go through each table and do
cursor.execute("DELETE FROM %s" % tablename)
which gives me
IntegrityError: (1451, u'Cannot delete or update a parent row: a
foreign key constraint fails (`testing`.`app_adjust_reason`,
CONSTRAINT `app_adjust_reason_ibfk_2` FOREIGN KEY (`adjust_reason_id`)
REFERENCES `adjust_reason` (`id`))')
Edit: I would like something generic that could be applied to any database; otherwise I would just drop the constraints specifically.
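To make the shape concrete: a generic brute-force sketch (with cursor and the list of table names assumed to come from the test setup above) would simply retry tables whose DELETE fails until every table is empty:

def delete_all_rows(cursor, tables):
    # Keep retrying tables whose DELETE failed on a foreign key constraint;
    # each pass empties at least the tables nothing else references.
    remaining = list(tables)
    while remaining:
        failed = []
        for table in remaining:
            try:
                cursor.execute("DELETE FROM %s" % table)
            except Exception:  # e.g. the IntegrityError 1451 shown above
                failed.append(table)
        if len(failed) == len(remaining):
            # No progress: a circular constraint would need dropping/disabling.
            raise RuntimeError("could not empty: %s" % ", ".join(failed))
        remaining = failed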
A more general approach is to create the database from scratch before the test run and drop it afterwards, using CREATE DATABASE db_name and DROP DATABASE db_name. This way you always start from a clean database state, and you don't have to worry about foreign key or other constraints.
Note that you would also need to create your table schema (and possibly test data) after you create the database.
As a real-world example, this is what Django does when you run your tests: the table schema is recreated by the Django ORM from your models, then the fixtures and/or the schema and data migrations are applied.
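A minimal sketch of that lifecycle, assuming a DB-API connection with privileges to create databases and MySQL-style backtick quoting (the database name is illustrative):

import contextlib

@contextlib.contextmanager
def fresh_database(connection, name="throwaway_test_db"):
    # Create a throwaway database for the test run and drop it afterwards.
    cursor = connection.cursor()
    cursor.execute("CREATE DATABASE `%s`" % name)
    try:
        # Recreate the table schema (and load any test data) here,
        # before handing the database over to the tests.
        yield name
    finally:
        cursor.execute("DROP DATABASE `%s`" % name)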
Related
I have inserted data into the table directly in PostgreSQL. Now when I try to insert data from the Django application, it generates a primary key duplication error. How can I resolve this issue?
Run
python manage.py sqlsequencereset [app_name]
and execute all of the resulting SQL statements (or just the one for the required table) in the database to reset the sequences.
Explanation:
You probably inserted rows with primary keys already set, instead of letting PostgreSQL auto-generate the ids. This is OK.
It means, however, that the internal PostgreSQL sequence used to get the next available id still holds an old value. You need to reset the sequence so it starts after the maximum id present in the table.
Django's manage.py has a command intended for just that: it prints SQL you can execute in the database to reset the sequences.
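If you'd rather do the reset programmatically for one table, here is a sketch of the equivalent call (the table and column names are illustrative, and are interpolated into the SQL for brevity only):

from django.db import connection

def reset_pk_sequence(table="myapp_mymodel", pk="id"):
    # Point the sequence backing table.pk at the current maximum id,
    # mirroring the setval() SQL that sqlsequencereset prints.
    with connection.cursor() as cursor:
        cursor.execute(
            'SELECT setval(pg_get_serial_sequence(%s, %s), '
            'COALESCE((SELECT MAX("{pk}") FROM "{table}"), 1))'.format(
                pk=pk, table=table
            ),
            [table, pk],
        )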
I think the problem is not in the database. Please check your Django code; you probably use get_or_create somewhere.
I have a job that does some work on a copy of a table corresponding to a Django model, and then replaces the working table with the copy when done.
The problem is that although the copy of the table picks up all of the indexes and everything else, it's not picking up the foreign key constraints.
Can I just add them back when I swap the table in? Or does South or Django depend on anything in the constraint name?
I'm on MySQL and Django 1.8.
(Let's assume I'm not able to change how the job works)
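If the constraints only need to exist again after the swap, a hedged sketch of re-adding one (every name below is illustrative; in practice you would read the real definitions from information_schema.KEY_COLUMN_USAGE before the swap and replay them on the copy):

from django.db import connection

with connection.cursor() as cursor:
    # Illustrative table/constraint names only, not the ones MySQL generated.
    cursor.execute(
        "ALTER TABLE myapp_child "
        "ADD CONSTRAINT myapp_child_fk_parent "
        "FOREIGN KEY (parent_id) REFERENCES myapp_parent (id)"
    )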
When trying to run my tests (python manage.py test) I am getting:
CommandError: Database test_db couldn't be flushed. Possible reasons:
* The database isn't running or isn't configured correctly.
* At least one of the expected database tables doesn't exist.
* The SQL was invalid.
Hint: Look at the output of 'django-admin.py sqlflush'. That's the SQL this command wasn't able to run.
The full error: cannot truncate a table referenced in a foreign key constraint
DETAIL: Table "install_location_2015_05_13" references "app".
HINT: Truncate table "install_location_2015_05_13" at the same time, or use TRUNCATE ... CASCADE.
I am using partitions in our project, which are generated on the fly via a Python function (so I can run it periodically). I don't have any models for these partitions.
The partition maintenance function is invoked after syncdb triggers the post_syncdb signal (so it is executed when the test database is set up).
How can I make Django clear the additional tables (partitions)? or
How can I tell Django to use CASCADE while running the tests?
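One possible shape for a workaround, sketched on the assumption that the partitions share a name prefix and that this is PostgreSQL (as the TRUNCATE ... CASCADE hint suggests): drop the dynamically created partitions before the flush runs, e.g. from the same maintenance hook that creates them.

from django.db import connection

def drop_partitions(prefix="install_location_"):
    # Find the dynamically created partition tables by prefix and drop
    # them with CASCADE so flush no longer trips over their foreign keys.
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT tablename FROM pg_tables WHERE tablename LIKE %s",
            [prefix + "%"],
        )
        for (name,) in cursor.fetchall():
            cursor.execute('DROP TABLE "%s" CASCADE' % name)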
This problem mainly occurs when the relation on an M2M field is changed: the old constraint remains and a new constraint is created for the new relation. This was fixed in Django 1.8.
I am trying to use alembic migrations to act on different versions of the same database. An example would be that I have two databases, one live and one for testing. Each of them might be in different states of migration. For one, the test database might not exist at all.
Say live has a table table1 with columns A and B. Now I would like to add column C. I change my model to include C and I generate a migration script that has the following code
op.add_column('table1', sa.Column('C', sa.String(), nullable=True))
This works fine with the existing live database.
If I now call alembic upgrade head against a non-existent test database, I get an (OperationalError) duplicate column name... error. I assume this is because my model contains the C column and alembic/sqlalchemy creates the full table automatically if it does not exist.
Should I simply trap the error or is there a better way of doing this?
I would suggest that, immediately after your test db is newly created, you stamp it with head:
command.stamp(configs_for_test_db, 'head')
This will insert the head revision number into the appropriate alembic table without actually running any migrations, so that the revision number reflects the state of the db (namely, that your newly created db is up to date with respect to migrations). After the db is stamped, alembic upgrade should behave properly.
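A fuller sketch, assuming an alembic.ini at the project root and an illustrative test database URL:

from alembic import command
from alembic.config import Config

# Point a config at the freshly created test database (URL is illustrative).
configs_for_test_db = Config("alembic.ini")
configs_for_test_db.set_main_option(
    "sqlalchemy.url", "postgresql://localhost/test_db"
)

# Create the schema from the current models first (e.g. with
# Base.metadata.create_all(test_engine)), then record that the database
# already matches the latest revision:
command.stamp(configs_for_test_db, "head")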
I added a url column in my table, and now sqlalchemy is saying 'unknown column url'.
Why isn't it updating the table?
There must be a setting when I create the session?
I am doing:
Session = sessionmaker(bind=engine)
Is there something I am missing?
I want it to update any table that is missing a column I've added to my Table definition in my Python code.
I'm not sure SQLAlchemy supports schema migration that well (at least the last time I touched it, it wasn't there).
A couple of options.
Don't manually specify your tables. Use the autoload feature to have SQLAlchemy automatically read the columns from your database. This will require tests to make sure it works, but you get the general idea. DRY.
Try SQLAlchemy migrate.
Manually update the table after you change the model specification.
In cases when you just add new tables, metadata.create_all(bind=engine) will create them from your class definitions.
However, create_all will NOT add new columns to, modify, or remove columns from tables that already exist in the database; for that you need a migration tool or a manual ALTER TABLE.
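For reference, sketches of the autoload and create_all approaches (the engine URL and table name are illustrative):

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("sqlite:///app.db")  # illustrative URL
metadata = MetaData()

# Autoload: reflect the columns from the live database instead of
# declaring them by hand.
pages = Table("pages", metadata, autoload_with=engine)

# create_all: issues CREATE TABLE only for tables that do not exist yet;
# it never ALTERs an existing table, so a newly added column on an
# existing table still needs a migration tool or a manual ALTER TABLE.
metadata.create_all(bind=engine)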