I am facing a problem where I'm trying to add data from a Python script to a MySQL database using the InnoDB engine; it works fine with the MyISAM engine. But the problem with MyISAM is that it doesn't support foreign keys, so I'd have to add extra code in every place where I insert or delete records.
Does anyone know why InnoDB doesn't work with Python scripts, and possible solutions to this problem?
InnoDB is transactional. You need to call connection.commit() after inserts/deletes/updates.
Edit: you can call connection.autocommit(True) to turn on autocommit.
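Both behaviors are easy to demonstrate without a MySQL server using the standard-library sqlite3 module, which follows the same DB-API 2.0 convention of starting connections with autocommit disabled (sqlite3 here is just a stand-in for MySQLdb):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(db)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

conn.execute("INSERT INTO t VALUES (1)")
conn.close()  # closed without commit(): the insert is rolled back

conn = sqlite3.connect(db)
lost = conn.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]  # 0

conn.execute("INSERT INTO t VALUES (1)")
conn.commit()  # now the change is durable
conn.close()

conn = sqlite3.connect(db)
kept = conn.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]  # 1
conn.close()
print(lost, kept)  # 0 1
```

With MySQLdb the fix is the same idea: call connection.commit() after your writes, or turn autocommit on as above.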
Python DB API disables autocommit by default
Pasted from Google (first page, 2nd result):
MySQL :: MySQL 5.0 Reference Manual :: 13.2.8 The InnoDB ...
By default, MySQL starts the session for each new connection with autocommit ...
dev.mysql.com/.../innodb-transaction-model.html
However
Apparently Python's DB-API starts MySQL connections in NON-autocommit mode, see:
http://www.kitebird.com/articles/pydbapi.html
From the article:
The connection object commit() method commits any outstanding changes in the current transaction to make them permanent in the database. In DB-API, connections begin with autocommit mode disabled, so you must call commit() before disconnecting or changes may be lost.
Bummer, dunno how to override that and I don't want to lead you astray by guessing.
I would suggest opening a new question titled:
How to enable autocommit mode in MySQL python DB-API?
Good luck.
I know this issue is not a new one on SO, but I'm unable to find a solution. Whenever I return to my desk after leaving my app running overnight, I get a "MySQL server has gone away" error that persists until I restart my uwsgi service. I've already done the following:
pool_recycle=some really large number in my create_engine() call
Added a ping_connection() listener via an @event.listens_for() decorator (and I can't use pool_pre_ping - that breaks my create_engine() call)
in /etc/my.cnf I added wait_timeout and interactive_timeout params with large values
but nothing has had any effect.
From the sqlalchemy doc located here, the pool_recycle feature is what you are looking for.
from sqlalchemy import create_engine
engine = create_engine("mysql://scott:tiger@localhost/test", pool_recycle=28700)
Set pool_recycle to a value less than wait_timeout in your MySQL configuration file (my.cnf).
MySQL's default wait_timeout is 28800 seconds (8 hours).
Don't forget to restart your services (i.e. mysql, etc.) if you modify the conf files.
I am running two terminal sessions, in the first one I've opened psql, and in the second one ipython with psycopg2 imported.
I'm connected to the same db in both sessions. When I update a table through ipython/psycopg2, psql session queries won't reflect the updates (i.e. I add a row in a table via psycopg2, and psql still fetches no rows).
What am I doing wrong?
Probably, after executing the update, you didn't call commit() on the connection object (which makes the changes to the database persistent).
See the first example in the docs http://initd.org/psycopg/docs/usage.html
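The cross-session behavior in the question can be reproduced with the standard-library sqlite3 module as a stand-in for psycopg2/psql: a second connection only sees a change once the first one commits.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(db)  # plays the role of the psycopg2 session
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

reader = sqlite3.connect(db)  # plays the role of the psql session

writer.execute("INSERT INTO t VALUES (1)")  # no commit yet
before = reader.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]  # 0

writer.commit()
after = reader.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]   # 1
print(before, after)  # 0 1
```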
I use Flask and peewee. Sometimes peewee throws this error:
MySQL server has gone away (error(32, 'Broken pipe'))
Peewee database connection
db = PooledMySQLDatabase(database, **{
    "passwd": password, "user": user,
    "max_connections": None, "stale_timeout": None,
    "threadlocals": True,
})
@app.before_request
def before_request():
    db.connect()

@app.teardown_request
def teardown_request(exception):
    db.close()
After the "MySQL server has gone away (error(32, 'Broken pipe'))" error occurs, SELECT queries keep working without problems, but INSERT, UPDATE, and DELETE queries don't. The INSERT/UPDATE/DELETE queries actually do take effect behind the scenes (in MySQL), but peewee still throws this error:
(2006, "MySQL server has gone away (error(32, 'Broken pipe'))")
The peewee documentation talks about this problem; here is the link: Error 2006: MySQL server has gone away
This particular error can occur when MySQL kills an idle database connection. This typically happens with web apps that do not explicitly manage database connections. What happens is your application starts, a connection is opened to handle the first query that executes, and, since that connection is never closed, it remains open, waiting for more queries.
So there is a problem with how your database connection is managed.
Since I can't reproduce your problem, could you please try closing your database connection this way:
@app.teardown_appcontext
def close_database(error):
    db.close()
And you may get some info from the doc: Step 3: Database Connections
I know this is an old question, but since there's no accepted answer I thought I'd add my two cents.
I was having the same problem when committing largish amounts of data in Peewee objects (larger than the amount of data MySQL allows in a single packet by default). I fixed it by raising max_allowed_packet in my.cnf.
To do this, open my.cnf and add the following line under the [mysqld] section:
max_allowed_packet=50M
... or whatever size you need, and restart mysqld.
I know this is an old question, but I also fixed the problem in another way, which might be of interest. In my case, it was an insert_many that was too large.
To fix it, simply do the insert in batches, as described in the peewee documentation.
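Recent peewee versions ship a chunked() helper for exactly this; the batching logic itself is just a few lines of plain Python, sketched here (the model and column names in the usage comment are made up for illustration):

```python
def chunked(iterable, n):
    """Yield successive lists of at most n items from iterable."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == n:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch

# Usage with peewee would look roughly like:
#   with db.atomic():
#       for batch in chunked(rows, 100):
#           MyModel.insert_many(batch).execute()

print(list(chunked(range(5), 2)))  # [[0, 1], [2, 3], [4]]
```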
I use MySQL 5.6, MySQLdb 1.2.4, and Python 2.7.3 on Windows 7. As the title says, I've modified MySQL's configuration file my.ini to change the charset.
After restarting the MySQL service, I checked the result from the MySQL command line.
When I create a database in MySQL Workbench, I select this collation: utf8mb4_unicode_ci.
However, the resulting database still doesn't come out as utf8mb4.
I really, really want to know how to create a utf8mb4 database, because I need to fix the problem of emoji characters being inserted into the database, which I think can be solved if I can create a utf8mb4 database.
I just fixed this problem by executing "SET NAMES utf8mb4" after creating the connection to the database and acquiring a cursor object.
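If you'd rather fix this server-side instead of running SET NAMES on every connection, a common approach is to make utf8mb4 the default in the MySQL configuration file. A sketch (the section names are standard; adjust values to your setup and restart mysqld afterwards):

```ini
[client]
default-character-set = utf8mb4

[mysqld]
character-set-server = utf8mb4
collation-server     = utf8mb4_unicode_ci
```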
I am having trouble applying a Django South migration:
As always, I executed the migrate command after a successful schemamigration
python manage.py migrate webapp
The log console:
Running migrations for webapp:
- Migrating forwards to 0020_auto__add_example.
> webapp:0020_auto__add_example
TransactionManagementError: Transaction managed block ended with pending COMMIT/ROLLBACK
The error is not related to the specific migration: if I migrate backwards and try another one, it shows the same message.
Edit: this is the log of the query:
(0.005) SELECT `south_migrationhistory`.`id`, `south_migrationhistory`.`app_name`, `south_migrationhistory`.`migration`, `south_migrationhistory`.`applied` FROM `south_migrationhistory` WHERE `south_migrationhistory`.`applied` IS NOT NULL ORDER BY `south_migrationhistory`.`applied` ASC; args=()
Running migrations for webapp:
- Migrating forwards to 0020_auto__add_example.
> webapp:0020_auto__add_example
(0.002) CREATE TABLE ROLLBACK_TEST (X INT); args=()
TransactionManagementError: Transaction managed block ended with pending COMMIT/ROLLBACK
I just ran into a similar issue.
MySQL 5.6.13 (on Amazon RDS)
Django==1.5.4
MySQL-python==1.2.4
South==0.8.2
I went through almost every possible fix here and through countless Google searches with zero luck.
I looked at the database schema and a table I had not created named 'ROLLBACK_TEST' was part of the schema.
Once I dropped that mystery table the migration ran flawlessly.
This table could only have originated via Django, South or possibly an internal process at Amazon as nothing else has access.
I had the same problem with Django 1.6 and South 1.0 on a MySQL instance. After turning on the django.db.backends logger I realised the migration was stuck on the following SQL statement:
DEBUG (0.003) CREATE TABLE ROLLBACK_TEST (X INT); args=None
So I checked the database and sure enough found the ROLLBACK_TEST table. Deleting it resolved the problem:
$ manage.py dbshell
mysql> DROP TABLE ROLLBACK_TEST;
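For reference, turning on the django.db.backends logger mentioned above can be done with a LOGGING setting along these lines (a minimal sketch for settings.py; the handler name is arbitrary, and note Django only emits the SQL messages when DEBUG = True):

```python
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Logs every SQL statement Django executes, at DEBUG level
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",
        },
    },
}

# In a real project Django applies this setting itself; calling
# dictConfig directly just shows the dict is a valid logging config.
logging.config.dictConfig(LOGGING)
```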
I had the same problem and was banging my head for a while.
It turns out my (MySQL) database user didn't have sufficient privileges.
I assigned ALTER, CREATE, DELETE, DROP, INDEX, INSERT, SELECT, and UPDATE privileges to the user, and everything worked fine.
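For anyone hitting the same privilege problem, the grants above can be assigned with a statement along these lines (mydb and myuser are placeholders for your own database and user):

```sql
GRANT ALTER, CREATE, DELETE, DROP, INDEX, INSERT, SELECT, UPDATE
    ON mydb.* TO 'myuser'@'localhost';
FLUSH PRIVILEGES;
```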
I had the same problem, and for me the solution was simply to give the proper permissions on my SQLite development.db file to the user executing the python manage.py migrate webapp command. The file was owned by www-data, so my user couldn't work on it.
I am writing the answer to the problem I had as it can be useful for somebody.
After some time of debugging I found that the problem was not related with django. It was an issue with the database and the virtual machine that hosts it.
I restarted the database machine and the migrations are now working.
When I ran into the same issue, my problem was more or less related to Django. Let me explain.
I was working with several tabs in my console: one ran a Django shell to test my models, and in another tab I ran the migrations. I hit an integrity error in the shell tab. Until I solved the problem (see this thread) or closed that tab, the error in the migration tab persisted. As the previous answer pointed out, this was something related to the DB, but not the DB's fault.