I am having trouble applying a Django South migration:
As always, I executed the migrate command after a successful schemamigration
python manage.py migrate webapp
The console log:
Running migrations for webapp:
- Migrating forwards to 0020_auto__add_example.
> webapp:0020_auto__add_example
TransactionManagementError: Transaction managed block ended with pending COMMIT/ROLLBACK
The error is not related to this specific migration: if I migrate backwards and try another one, it shows the same message.
Edit: this is the query log:
(0.005) SELECT `south_migrationhistory`.`id`, `south_migrationhistory`.`app_name`, `south_migrationhistory`.`migration`, `south_migrationhistory`.`applied` FROM `south_migrationhistory` WHERE `south_migrationhistory`.`applied` IS NOT NULL ORDER BY `south_migrationhistory`.`applied` ASC; args=()
Running migrations for webapp:
- Migrating forwards to 0020_auto__add_example.
> webapp:0020_auto__add_example
(0.002) CREATE TABLE ROLLBACK_TEST (X INT); args=()
TransactionManagementError: Transaction managed block ended with pending COMMIT/ROLLBACK
I just ran into a similar issue.
MySQL 5.6.13 (on Amazon RDS)
Django==1.5.4
MySQL-python==1.2.4
South==0.8.2
I went through almost every possible fix here and through countless Google searches with zero luck.
I looked at the database schema and found a table I had not created, named 'ROLLBACK_TEST'.
Once I dropped that mystery table the migration ran flawlessly.
This table could only have originated via Django, South, or possibly an internal process at Amazon, as nothing else has access.
I had the same problem with Django 1.6 and South 1.0 on a MySQL instance. After turning on the django.db.backends logger I realised the migration was stuck on the following SQL statement:
DEBUG (0.003) CREATE TABLE ROLLBACK_TEST (X INT); args=None
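For reference, enabling that logger looks something like this in settings.py (a minimal sketch; the handler name is illustrative, and note that Django only logs queries when DEBUG is True):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # send log records straight to the console
        'console': {'level': 'DEBUG', 'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # this is the logger that emits every SQL statement Django runs
        'django.db.backends': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}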
So I checked the database and sure enough found the ROLLBACK_TEST table. Deleting it resolved the problem:
$ manage.py dbshell
mysql> DROP TABLE ROLLBACK_TEST;
I had the same problem and was banging my head for a while.
It turns out my (MySQL) database user didn't have sufficient privileges.
I assigned: ALTER, CREATE, DELETE, DROP, INDEX, INSERT, SELECT, UPDATE to the user and everything worked fine.
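For reference, the grant itself looks roughly like this, run as an administrative user (the database, user, and host names below are placeholders):

import MySQLdb

# connect as an admin user that is allowed to grant privileges
admin = MySQLdb.connect(host="localhost", user="root", passwd="...", db="mysql")
cur = admin.cursor()
cur.execute(
    "GRANT ALTER, CREATE, DELETE, DROP, INDEX, INSERT, SELECT, UPDATE "
    "ON mydb.* TO 'appuser'@'localhost'"
)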
I had the same problem, and for me the solution was simply to give the user running the python manage.py migrate webapp command the proper rights on my SQLite development.db file. The file was owned by www-data, so my own user couldn't write to it.
I am writing the answer to the problem I had as it can be useful for somebody.
After some time debugging I found that the problem was not related to Django. It was an issue with the database and the virtual machine that hosts it.
I restarted the database machine and the migrations are now working.
When I ran into the same issue, my problem turned out to be more or less related to Django after all. Let me explain.
I was working in several console tabs: one running a Django shell to test my models, another running the migrations. I hit an integrity error in the shell tab, and until I resolved the problem (see this thread) or closed that tab, the error in the migration tab persisted. As the previous answer pointed out, this was something related to the DB, but not the DB's fault.
Related
I have a little Django app that uses PyMongo and MongoDB.
If I write (or update) something in the database, I have to restart the server for it to show in the web page. I'm running with 'python manage.py runserver'
I switched to the django dummy cache but that didn't help.
Every database action is inside a 'with MongoClient' block.
I figured it out: I was reading the data into django_tables2 class variables, so it was loaded once at import time and never refreshed...
Bangs forehead on desk...
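For the record, the fix amounts to querying inside the view instead of in a class attribute; a minimal sketch, with hypothetical model and collection names:

import django_tables2 as tables
from django.shortcuts import render
from pymongo import MongoClient

class DocumentTable(tables.Table):
    name = tables.Column()
    value = tables.Column()

def document_list(request):
    # Query inside the view, so the data is re-read on every request --
    # not once at import time in a class attribute, where it would be
    # cached for the life of the server process.
    with MongoClient() as client:
        rows = list(client.mydb.mycollection.find({}, {"_id": 0}))
    return render(request, "documents.html", {"table": DocumentTable(rows)})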
I have been through numerous other posts but am still stuck. The configuration is simple enough that I'll detail everything, though I think only a few of the following details are relevant:
Running psql as user postgres on Ubuntu 16.04, I've created database qedadmin with 3 tables: networks, devices, and settings; there's also a sequence relation networks_networkid_seq owned by networks.networkId.
I am writing a python script to run on the same server using psycopg2 which needs to execute a SELECT on the settings table. Many examples show scripts connecting as user 'postgres', and indeed this works perfectly for me. However, I suppose it's better to use a less-privileged user for this sort of thing, so I created a user qedserver with a password.
Using this new user and password and localhost in the psycopg2 connection string, I can successfully get a connection and a cursor (and if I use incorrect user or password values, the connection fails, so I know the defined user and password and the authentication from python are all working). However, all of my attempts to execute a SELECT statement on table settings are returning code 42501: permission denied for relation settings.
I initially granted user qedserver only SELECT privileges, and only on table settings:
GRANT SELECT ON settings TO qedserver;
Since that did not work, I've gradually escalated privileges to the point that now user qedserver has ALL PRIVILEGES on all 3 tables, on the sequence relation, and on the database:
GRANT ALL PRIVILEGES ON settings TO qedserver;
GRANT ALL PRIVILEGES ON devices TO qedserver;
GRANT ALL PRIVILEGES ON networks TO qedserver;
GRANT ALL PRIVILEGES ON networks_networkid_seq TO qedserver;
GRANT ALL PRIVILEGES ON DATABASE qedadmin TO qedserver;
but I am still getting "permission denied for relation settings".
To be clear, changing just the connection string in my Python script from one for user postgres to one for user qedserver makes the difference between success and failure, so I am not providing Python code because I think it's irrelevant (but I can do so if you think it would help).
What am I missing?
Edited to add: there is no Linux user named qedserver; I don't think there needs to be, but perhaps I'm wrong about that? (Further edit: I just created one as an experiment and it made no difference.)
Updates: per @klin's comment and link, I can see that all of the privileges have been granted successfully: qedserver has arwdDxt privileges on the networks, devices, and settings tables, and rwU privileges on the networks_networkid_seq sequence relation; and \l reports Access Privileges on the qedadmin database of =Tc/postgres + postgres=CTc/postgres + qedserver=CTc/postgres.
I have also edited the config file (/etc/postgresql/10/main/postgresql.conf on my system) to set log_error_verbosity=VERBOSE and sent a SIGHUP to the postgresql process to re-read the config file. This added another line to the error log (/var/log/postgresql/postgresql-10-main.log on my system) for each failed attempt; now the log shows (the new line is the middle one):
qedserver#qedadmin ERROR: 42501: permission denied for relation settings
qedserver#qedadmin LOCATION: aclcheck_error, aclchk.c:3410
qedserver#qedadmin STATEMENT: SELECT * FROM settings WHERE deviceId = 10020;
What else can I look at or try to make headway?
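For instance, would a check along these lines help pin it down? (A sketch; the connection parameters are the ones described above, and current_database() and has_table_privilege() are standard PostgreSQL functions.)

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="qedadmin",
                        user="qedserver", password="...")
cur = conn.cursor()
# Confirm which database and schema this connection actually lands in;
# grants made in another database or schema would not apply here.
cur.execute("SELECT current_database(), current_user, current_schema()")
print(cur.fetchone())
# Ask the server directly whether qedserver may SELECT from settings.
cur.execute("SELECT has_table_privilege('qedserver', 'settings', 'SELECT')")
print(cur.fetchone())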
Editing 4 months later to add: this was all for a new project for which I thought it would be advantageous to use PostgreSQL for a few reasons. Hitting this problem so early in development and being unable to resolve it over several days, I gave up and went with MySQL instead. Not exactly a "solution" to the OP, so I'm not adding it as an answer...
My database on Amazon currently has only a little data in it (I am making a web app, but it is still in development), and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this I have completely recreated my Elastic Beanstalk app, but it seems like there should be a better way. On my local machine, I take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on Amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I ran into problems with Elastic Beanstalk.
The easiest way to accomplish this is to SSH into one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and create your DB, you just add the necessary connection arguments. For example:
$ createdb -h <RDS endpoint> -U <user> -W ebdb
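If you would rather do it from Python, here is a sketch with psycopg2 (endpoint and credentials are placeholders; note that DROP/CREATE DATABASE cannot run inside a transaction, and you must be connected to a different database than the one you are dropping):

import psycopg2

conn = psycopg2.connect(host="<RDS endpoint>", dbname="postgres",
                        user="<user>", password="<password>")
conn.autocommit = True  # DROP/CREATE DATABASE refuse to run in a transaction
cur = conn.cursor()
cur.execute("DROP DATABASE IF EXISTS ebdb")
cur.execute("CREATE DATABASE ebdb")
conn.close()

After that, run python manage.py migrate as usual to recreate the schema.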
You can also create an RDS snapshot while the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot.
I had the same problem and came up with a workaround. In your Python code, just add and run the following function the next time you deploy your app:
FOR SQLALCHEMY AFTER VERSION 2.0
from sqlalchemy import create_engine, text

tables = ["table1_name", "table2_name"]  # the names of the tables you want to delete
engine = create_engine("sqlite:///example.db")  # here you create your engine

def delete_tables(tables):
    for table in tables:
        # CASCADE drops the tables even if other tables reference them
        # (note: CASCADE is PostgreSQL syntax; drop it if you are on SQLite)
        sql = text(f"DROP TABLE IF EXISTS {table} CASCADE;")
        with engine.connect() as connection:
            with connection.begin():
                connection.execute(sql)

delete_tables(tables)  # Comment this line out after running it once.
FOR SQLALCHEMY BEFORE VERSION 2 (I guess)
def delete_tables(tables):
    for table in tables:
        engine.execute(f"DROP TABLE IF EXISTS {table} CASCADE;")

delete_tables(tables)  # Comment this line out after running it once.
After you deploy and run this code once, all your tables will be deleted.
IMPORTANT: Delete or comment out this code after that, otherwise you will delete all your tables every time you deploy your code to AWS.
I have a PostgreSQL schema that resides in a schema.sql file that gets run each time a database connection is initiated in Python. It looks something like:
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
facebook_id TEXT NOT NULL,
name TEXT NOT NULL,
access_token TEXT,
created TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
The app is deployed on Heroku, using their PostgreSQL and everything works as expected.
Now, what if I want to change the structure of my users table a bit? What is the easiest and best way to do this? I thought of writing an ALTER... line in schema.sql for each change I want to make to the database, but I don't think this is the best approach, since after some time the schema file will be full of ALTERs and it will slow down my app.
What's the indicated way to deploy changes made to a database?
Running a hard-coded script on each connection is not a great way to handle schema management.
You need to either manage the schema manually, or use a full-fledged tool that keeps a schema version identifier in the database, checks it, and applies a script to upgrade to the next schema version whenever it's behind the latest one. Rails calls this "migrations" and it kind-of works. If you're using Django, it has schema management too.
If you're not using a framework like that, I suggest just writing your own schema upgrade scripts. Add a "schema_version" table with a single row. SELECT it when the app first starts after a redeploy, and if it's lower than the current version the app knows about, apply the update scripts in order, e.g. schema_1_to_2, schema_2_to_3, etc.
I don't recommend doing this on connect; do it on app start, or better, as a special maintenance command. If you do it on every connection, you'll have multiple connections trying to make the same changes and you'll end up with duplicated columns and all sorts of other mess.
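A minimal sketch of that pattern with psycopg2 (the table name, version numbers, and ALTER statements are illustrative):

import psycopg2

# Upgrade scripts, keyed by the version they upgrade *from*.
UPGRADES = {
    1: "ALTER TABLE users ADD COLUMN email TEXT",           # schema_1_to_2
    2: "ALTER TABLE users ALTER COLUMN name SET NOT NULL",  # schema_2_to_3
}
CURRENT_VERSION = 3

def upgrade_schema(conn):
    with conn:  # one transaction; commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.execute("CREATE TABLE IF NOT EXISTS schema_version"
                        " (version integer NOT NULL)")
            cur.execute("SELECT version FROM schema_version")
            row = cur.fetchone()
            if row is None:  # fresh database, schema.sql already built the latest schema
                cur.execute("INSERT INTO schema_version VALUES (%s)",
                            (CURRENT_VERSION,))
                return
            version = row[0]
            while version < CURRENT_VERSION:
                cur.execute(UPGRADES[version])
                version += 1
            cur.execute("UPDATE schema_version SET version = %s", (version,))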
I support several Django apps on Heroku with Postgres. I just connect via pgAdmin and run my scripts when changes are required. I don't see any need for running a script every time a connection is made.
I am facing a problem where I am trying to add data from a Python script to a MySQL database using the InnoDB engine; it works fine with the MyISAM engine. But the problem with MyISAM is that it doesn't support foreign keys, so I would have to add extra code everywhere I insert or delete records in the database.
Does anyone know why InnoDB doesn't work with my Python script, and possible solutions for this problem?
InnoDB is transactional. You need to call connection.commit() after inserts/deletes/updates.
Edit: you can call connection.autocommit(True) to turn on autocommit.
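A minimal sketch with MySQL-python (connection parameters and table name are placeholders):

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("INSERT INTO parent (name) VALUES (%s)", ("example",))
conn.commit()  # without this, InnoDB rolls the insert back when you disconnect
# ...or, instead of committing manually after every change:
# conn.autocommit(True)
conn.close()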
The Python DB API disables autocommit by default.
Pasted from google (first page, 2nd result)
MySQL :: MySQL 5.0 Reference Manual :: 13.2.8 The InnoDB ...
By default, MySQL starts the session for each new connection with autocommit ...
dev.mysql.com/.../innodb-transaction-model.html
However
Apparently Python starts MySQL connections in non-autocommit mode; see:
http://www.kitebird.com/articles/pydbapi.html
From the article:
The connection object commit() method commits any outstanding changes in the current transaction to make them permanent in the database. In DB-API, connections begin with autocommit mode disabled, so you must call commit() before disconnecting or changes may be lost.
Bummer, dunno how to override that and I don't want to lead you astray by guessing.
I would suggest opening a new question titled:
How to enable autocommit mode in MySQL python DB-API?
Good luck.