Schema Migration after dropping table - python

I added a column to my models.py and it started giving me issues. While trying to solve the problem, I've done a couple of things:
Dropped the table: ./manage.py sqlclear app | ./manage.py dbshell
Tried to "reset" the schema: ./manage.py schemamigration app --initial
Tried to migrate: ./manage.py migrate app
After doing all these things, I get this error after trying to migrate:
FATAL ERROR - The following SQL query failed: CREATE TABLE "projects_project" ("id" integer NOT NULL PRIMARY KEY)
The error was: table "projects_project" already exists
Question: How do I repair my database? I don't care about any of the data in the db.
Edit:
One of the related posts took me to this link: Django South - table already exists. Apparently if you fake the migration, all is well.
./manage.py migrate myapp --fake
I'm still unsure of all the repercussions of this, but I guess that's what the docs are for.

Well, the error points it out pretty clearly:
table "projects_project" already exists
You can either do it the quick and dirty way and drop the table. In that case, log into your DBMS. If it's MySQL, you'll just open the terminal and type:
mysql -u root -p
(you'll be prompted for your password; if you pass it inline instead, it must follow -p with no space)
Then select the database:
use your_database;
Finally, drop the table:
DROP TABLE projects_project;
You should be able to migrate now.
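As a self-contained sketch of both the failure and the fix, here is the same sequence against an in-memory SQLite database (the table name and DDL are taken from the failing query in the question; SQLite stands in for MySQL here purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ddl = 'CREATE TABLE "projects_project" ("id" integer NOT NULL PRIMARY KEY)'
conn.execute(ddl)

# Re-running the same CREATE TABLE reproduces the migration error.
try:
    conn.execute(ddl)
    msg = ""
except sqlite3.OperationalError as e:
    msg = str(e)
print(msg)  # table "projects_project" already exists

# Dropping the table first (the quick-and-dirty fix) lets the CREATE succeed.
conn.execute('DROP TABLE "projects_project"')
conn.execute(ddl)
```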
The elegant way would be to undo the migration. But every framework has its own way to do that. You need to figure that out first - or give us more information.

Related

OperationalError: cursor "_django_curs_<id>" does not exist

We have an online store web app powered by Django, PostgreSQL, and Heroku.
For a specific campaign (you can think of a campaign as a product to purchase), we have sold 10k+ copies successfully. Yet some of our users encountered this error, according to our Sentry reports. The common trait among these users is that none of them had address information before the purchase. Generally, users fill out the address form right after registering. If they don't, they need to fill out the form while purchasing the product and submit both together.
This is what the trace looks like:
OperationalError: cursor "_django_curs_140398688327424_146" does not exist
(66 additional frame(s) were not displayed)
...
File "store/apps/store_main/templatetags/store_form_filters.py", line 31, in render_form
return render_to_string('widgets/store_form_renderer.html', ctx)
File "store/apps/store_main/templatetags/store_form_filters.py", line 20, in render_widget
return render_to_string('widgets/store_widget_renderer.html', ctx)
File "store/apps/store_main/widgets.py", line 40, in render
attrs=attrs) + "<span class='js-select-support select-arrow'></span><div class='js-select-support select-arrow-space'><b></b></div>"
OperationalError: cursor "_django_curs_140398688327424_146" does not exist
Another weird common thing: there are exception messages between the SQL queries before the failure (shown in an image in the original post).
I'm adding this in case it is somehow related. What may also be related: the users who get this error are the ones who try to purchase the campaign right after a bulk mailing. So heavy traffic might be the reason, but we are not sure.
We asked Heroku about the problem since they host the Postgres instance, but they don't have any clue either.
I know the formal cause of this error is trying to use a cursor after a commit: since the cursor is destroyed when the transaction ends, reaching for it raises this error. But I don't see how that applies to our scenario; we are not touching the cursor in any way. What am I missing? What may produce this error? How can we prevent it? Any ideas would be appreciated.
The reason for your error might be that you added fields to a model and forgot to makemigrations and migrate.
Have a look at this answer: When in run test cases then I will get this error: psycopg2.OperationalError: cursor "_django_curs_140351416325888_23" does not exist
If you're using pytest-django and have enabled the --reuse-db optimization, and you have made DB migrations between test runs, you need to re-create the DB tables:
pytest --create-db
Most likely you have forgotten to makemigrations and migrate them to the database.
If you are sure you did make the migrations, and running python manage.py makemigrations and python manage.py migrate does not find the changes you made, then the database and models are not in sync.
Sometimes this situation can be very frustrating if you have a big database. You will need to manually inspect your models.
To help out you can try this trick, which has been working for me.
Step 1 (delete all migrations from apps)
I am going to assume you are using a Unix terminal. Run sudo rm -rv */migrations/*. This removes all the migration files and caches (note it also deletes each migrations/__init__.py, which step 2 recreates).
Step 2 (make the migration folders in each app)
Run the command mkdir <app-folder>/migrations && touch <app-folder>/migrations/__init__.py. Replace <app-folder> with the name of each app in the INSTALLED_APPS list in your Django settings file.
Step 3 (Make Migrations)
Here we repopulate the migrations folder in each app with migration files. Run python manage.py makemigrations, then run python manage.py migrate --fake. We use the --fake flag because the tables already exist in the database; without it, Django would try to create tables that are already there and you would be greeted with the "already exists" error.
If this did not work, you will need to tamper with some additional tables in the database, such as django_migrations. This is not recommended, as other tables depend on it (such as django_content_type), and it will throw you into a chain of related tables that you have to inspect manually, which is very painful.
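For context on why --fake avoids the error: Django records applied migrations in the django_migrations table, and a faked migration only adds a bookkeeping row there, without executing any SQL against your app tables. A minimal sketch of that bookkeeping, using an in-memory SQLite database, a simplified version of the table, and a hypothetical app named "projects":

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
# Simplified shape of Django's migration-tracking table.
conn.execute("""
    CREATE TABLE django_migrations (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        app VARCHAR(255) NOT NULL,
        name VARCHAR(255) NOT NULL,
        applied DATETIME NOT NULL
    )
""")

# Faking 0001_initial for the hypothetical app: record it as applied
# without touching any app tables (which already exist in the database).
conn.execute(
    "INSERT INTO django_migrations (app, name, applied) VALUES (?, ?, ?)",
    ("projects", "0001_initial", datetime.now(timezone.utc).isoformat()),
)
applied = conn.execute("SELECT app, name FROM django_migrations").fetchall()
print(applied)  # [('projects', '0001_initial')]
```

This is also why faking the wrong migration is dangerous: Django then believes schema changes were applied when they never were.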

Error when attempting loaddata on django database dump

I started a new Django project, the database I had before was still relevant so I want to merge it over to the new one using dumpdata & loaddata.
I am using the out-of-the-box database Django comes with: SQLite.
The problem is when I use loaddata I get this error:
bad_row[1], referenced_table_name, referenced_column_name,
django.db.utils.IntegrityError: Problem installing fixtures: The row in table 'django_admin_log' with primary key '1' has an invalid foreign key: django_admin_log.content_type_id contains a value '7' that does not have a corresponding value in django_content_type.id.
The steps I followed to get here are:
Copied the models file from my first project to my new one and migrated.
python3 manage.py dumpdata admin > db.json --indent 2
python3 manage.py loaddata db.json
tldr; My goal is to take the data from the old database in another project and put it in the new project's database.
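The IntegrityError above is a plain foreign-key violation: the dump contains a django_admin_log row whose content_type_id (7) has no matching row in django_content_type in the new database. A minimal SQLite sketch of the same failure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE django_content_type (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE django_admin_log (
        id INTEGER PRIMARY KEY,
        content_type_id INTEGER REFERENCES django_content_type(id)
    )
""")

# content_type_id 7 has no matching django_content_type row, as in the error.
try:
    conn.execute("INSERT INTO django_admin_log VALUES (1, 7)")
    msg = ""
except sqlite3.IntegrityError as e:
    msg = str(e)
print(msg)  # FOREIGN KEY constraint failed
```

A commonly suggested workaround is to exclude the offending tables from the dump (for example, dumpdata --exclude admin.logentry --exclude contenttypes), though whether that fits depends on which data you actually need to keep.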

Cannot complete Flask-Migration

I've setup a local Postgres DB with SQLAlchemy and cannot commit my first entry. I keep on getting this error...
ProgrammingError: (ProgrammingError) relation "user" does not exist
LINE 1: INSERT INTO "user" (name, email, facebook_id, facebook_token...
It seems like the fields aren't matching those in the database. I'm trying to migrate using flask-migrate, but when I run $ python app.py db migrate I get this error...
raise util.CommandError("No such revision '%s'" % id_)
alembic.util.CommandError: No such revision '39408d6b248d'
It may be best to delete everything and start from scratch as it seems I have botched my database setup and / or migration but I'm not sure how to.
UPDATE: The database has started working now (I dropped and created it again). However, I'm still getting the same error trying to run migrations, and it turns out the "no such revision '39408d6b248d'" is referring to a migration from an unrelated project. I re-installed flask-migrate, but I get the same error.
flask-migrate will create a table named "alembic_version" in your database.
So you should drop this table and delete the migrations folder in your project,
and then use $ python app.py db init again...
I think $ python app.py db migrate will then work fine.
Alembic keeps the migration history in your database; this is why it still recognises that there is another revision there. I keep my project on Heroku, so I was able to just do heroku pg:pull ... to get a fresh copy of my database. Prior to this you will have to drop your local db. In case you don't want to drop your local database, dropping the table should work too. I use PG Commander as a GUI tool to quickly browse my databases.
The first step is to run this command:
alembic current
You should get an error as mentioned above (the goal is to make sure that this command returns a valid response).
The reason you're getting this is that Alembic is confused about your current state: it assumes you should be at revision 39408d6b248d, but then finds that that revision is invalid.
To investigate, let's find out which revisions Alembic considers valid. Run this command:
alembic history --verbose
You'll get a list of all previous revisions (note: it's a good idea to attach a message to each revision; think of it like a good git commit message):
Rev: 594cc72f56fd (head)
Parent: 262f40e28682
Path: ***************
adjust context_id in log table so that it is a substring of the object_id
Revision ID: 594cc72f56fd
Revises: 262f40e28682
Create Date: 2015-07-22 14:31:52.424862
Rev: 262f40e28682
Parent: 1dc902bd1c2
Path: ***************
add context_id column to log table
Revision ID: 262f40e28682
Revises: 1dc902bd1c2
Create Date: 2015-07-22 11:05:37.654553
Rev: 1dc902bd1c2
Parent: <base>
Path: ***************
Initial database setup
Revision ID: 1dc902bd1c2
Revises:
Create Date: 2015-07-06 09:55:11.439330
The revision 39408d6b248d clearly doesn't exist among the revisions above. This revision is stored in the alembic_version table in the database; you can verify by going to your database and running:
$ select * from alembic_version;
version_num
--------------
57ac999dcaa7
So now you should review the state of your database and see where it fits vis-à-vis the revisions output above.
In my case, by poking around my database it became obvious which revision I'm at: the database has been set up, but the other revisions haven't been applied yet.
So now I replace the value in the database with the one I found from the history command above:
vibereel=> update alembic_version set version_num = '1dc902bd1c2';
and now running alembic current returns
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
1dc902bd1c2
done.
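The manual fix above boils down to one UPDATE: alembic_version holds a single version_num, and you point it at a revision that actually exists in your migration history. A sketch with an in-memory SQLite database, using the revision ids from the answer above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# alembic_version is a one-column, (usually) one-row bookkeeping table.
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('39408d6b248d')")  # stale revision

# Point it at a known-good revision taken from `alembic history`.
conn.execute("UPDATE alembic_version SET version_num = '1dc902bd1c2'")
current = conn.execute("SELECT version_num FROM alembic_version").fetchone()[0]
print(current)  # 1dc902bd1c2
```

After this, `alembic current` (or `python app.py db current` with flask-migrate) should report the revision you set.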
It means that the entry in the alembic_version table of your db is "39408d6b248d", and there is no migration file related to it in the migrations folder (by default migrations/versions).
So it is better to drop the alembic_version table from your db and do
$ python app.py db history to get the new head of migrations, say, 5301c31377f2
Now run $ python app.py db stamp 5301c31377f2 to let alembic know that it's your migration head (which gets stored in table alembic_version).
Assuming that you have checked that the database exists using psql or pgAdmin, this error usually means exactly what it says. That can be due to either:
not connecting to the correct database instance (check your db url: host/port and db name)
not correctly configuring SQLAlchemy (see: SQLAlchemy create_all() does not create tables)
I got the same error yesterday. In my case, the revision number '39408d6b248d' came from previous migration upgrade actions: each time you run an upgrade script, an Alembic version is generated and stored in data.sqlite.
You must have done an upgrade that generated '39408d6b248d', but then deleted the whole migrations/ directory and removed all the upgrade scripts. The database (e.g. data.sqlite) still stores '39408d6b248d', but no corresponding migration script exists.
For my solution, I deleted all Alembic versions in the database and did all the upgrades from scratch.
I ran into a similar issue. After running the python manage.py db migrate command, the database tables were not created; there was only an alembic_version table in the database.
I found the solution in the flask-migrate documentation:
https://flask-migrate.readthedocs.org/en/latest/
The migration script needs to be reviewed and edited, as Alembic
currently does not detect every change you make to your models. In
particular, Alembic is currently unable to detect indexes. Once
finalized, the migration script also needs to be added to version
control.
Then you can apply the migration to the database:
python manage.py db upgrade
This command created the tables and applied the database migrations.
I had the same issue. Based on the version in the alembic_version table in the db, the migration is looking for that version in the migrations/versions folder, which has already been deleted. Therefore the solution is to delete the alembic_version table:
If you are using sqlite,
1. Open your xxx.sqlite db file.
sqlite3 xxx.sqlite
2. check tables
.tables
3. you will see alembic_version; delete it
DROP TABLE alembic_version;

manage.py syncdb fails to create auth_user table

Hello fellow stackoverflowers,
I am trying to follow the django intro tutorial using nitrous.io
When I run manage.py syncdb it creates a few tables till it hits the auth_user table.
Then it throws the following error:
Creating table auth_user
DatabaseError: (1114, "The table 'auth_user' is full")
I don't know how to fix this error.
I am running mysql 5.6.13
Could someone please take the time to help!
Thanks a lot for taking the time.
It looks similar to another question (ERROR 1114 (HY000): The table is full)
I would suggest trying the same fix by changing/adding the following line in your my.cnf:
innodb_data_file_path = ibdata1:10M:autoextend:max:512M

Manage.py sqlall <myapp> does not create column

I added orgname = models.CharField(max_length=50) to an existing class in my models.py and ran python manage.py syncdb, but figured out that it doesn't create columns (I'm using PostgreSQL, by the way). So I needed to do python manage.py sqlall <myapp>, which I did, and it output the following:
BEGIN;
CREATE TABLE "file_uploader_files" (
"id" serial NOT NULL PRIMARY KEY,
"file" varchar(100) NOT NULL,
"orgname" varchar(50) NOT NULL
)
;
COMMIT;
Yet, when I go into the shell for Django or look in pgAdmin3, the column is still not created. What am I doing wrong? I'd add it manually but I'm not sure how.
P.S. The table was already created beforehand, and so was the file varchar; I added orgname after the table was initially created.
The documentation for the sqlall command says:
Prints the CREATE TABLE and initial-data SQL statements for the given app name(s).
It prints the SQL; it doesn't run anything. Django will never modify your schema; you'll need to do it yourself. The output above can help by showing you the type of the orgname field. Or use something like South.
Also see this SO question: update django database to reflect changes in existing models (the top two answers cover your question).
From the accepted answer:
note: syncdb can't update your existing tables. Sometimes it's impossible to decide what to do automagically - that's why South scripts are so great.
And in another answer...
python manage.py reset <your_app>
This will update the database tables for your app, but will completely destroy any data that existed in those tables.
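If you'd rather add the column by hand, as the question contemplates, it is a single ALTER TABLE. A sketch against an in-memory SQLite database (table and column names taken from the question; in PostgreSQL the equivalent statement would be ALTER TABLE "file_uploader_files" ADD COLUMN "orgname" varchar(50)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The table as syncdb originally created it, before orgname was added.
conn.execute("""
    CREATE TABLE file_uploader_files (
        id INTEGER PRIMARY KEY,
        file VARCHAR(100) NOT NULL
    )
""")

# Add the missing column that sqlall printed but never executed.
conn.execute("ALTER TABLE file_uploader_files ADD COLUMN orgname VARCHAR(50)")
cols = [row[1] for row in conn.execute("PRAGMA table_info(file_uploader_files)")]
print(cols)  # ['id', 'file', 'orgname']
```

Note that SQLite requires a default value when adding a NOT NULL column to an existing table, which is why the sketch omits the NOT NULL from sqlall's output.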
