Django and Postgres transaction rollback - python

I have a piece of code that runs in a background process and looks like this:
from django.db import transaction

try:
    <some code>
    transaction.commit()
except Exception, e:
    print e
    transaction.rollback()
In a test, I break <some code> with data that causes a database error. The exception is the following:
File "/home/commando/Development/Diploma/streaminatr/stream/testcases/feeds.py", line 261, in testInterrupt
form.save(self.user1)
File "/usr/lib/pymodules/python2.5/django/db/transaction.py", line 223, in _autocommit
return func(*args, **kw)
File "/home/commando/Development/Diploma/streaminatr/stream/forms.py", line 99, in save
print(models.FeedChannel.objects.all())
File "/usr/lib/pymodules/python2.5/django/db/models/query.py", line 68, in `__repr__ `
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "/usr/lib/pymodules/python2.5/django/db/models/query.py", line 83, in `__len__ `
self._result_cache.extend(list(self._iter))
File "/usr/lib/pymodules/python2.5/django/db/models/query.py", line 238, in iterator
for row in self.query.results_iter():
File "/usr/lib/pymodules/python2.5/django/db/models/sql/query.py", line 287, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/lib/pymodules/python2.5/django/db/models/sql/query.py", line 2369, in execute_sql
cursor.execute(sql, params)
InternalError: current transaction is aborted, commands ignored until end of transaction block
This is what I expect. The bad thing is that I still get the same error when I try to access the DB after transaction.rollback() is called. What should I do to roll back the transaction successfully and make the connection usable once again?
Btw, I also tried inserting print connection.queries to debug the code, and it always returns an empty list. Could it be that Django is using some other DB connection?
The code runs outside the request-response cycle. I tried switching TransactionMiddleware on and off, but it had no effect.
I am using Django 1.1 and Postgres 8.4.

The default TestCase runs every test inside a transaction that it manages itself, so it does not play well with code that commits and rolls back on its own; you need to use TransactionTestCase in this case.
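A minimal sketch of what that looks like (the test class name and helper are hypothetical, not taken from the question):

from django.test import TransactionTestCase

class InterruptTestCase(TransactionTestCase):
    # TransactionTestCase does not wrap the test in its own transaction,
    # so the commit()/rollback() calls in the background code hit the real
    # transaction machinery.
    def test_interrupt(self):
        run_background_job_with_bad_data()  # hypothetical stand-in for <some code>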

I wrote this decorator based on the transaction middleware source. Hope it helps, works perfectly for me.
import django.db.transaction

def djangoDBManaged(func):
    def f(*args, **kwargs):
        django.db.transaction.enter_transaction_management()
        django.db.transaction.managed(True)
        try:
            rs = func(*args, **kwargs)
        except Exception:
            if django.db.transaction.is_dirty():
                django.db.transaction.rollback()
            django.db.transaction.leave_transaction_management()
            raise
        finally:
            if django.db.transaction.is_managed():
                if django.db.transaction.is_dirty():
                    django.db.transaction.commit()
                django.db.transaction.leave_transaction_management()
        return rs
    # So logging gets the right call info whatever the decorator order is
    f.__name__ = func.__name__
    f.__doc__ = func.__doc__
    f.__dict__ = func.__dict__
    return f
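Usage would look roughly like this (the function name is just an example, not from the question):

@djangoDBManaged
def process_feeds():
    # ORM calls here run inside a managed transaction; the decorator commits
    # on success and rolls back (then re-raises) on error
    ...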

Related

Django: Overriding Postgres Database wrapper to set search path throws error

I want to establish a connection with a Postgres database in Django. As Django doesn't provide support for Postgres schemas, I am trying to set the search_path immediately after establishing a connection. To achieve this I have subclassed the DatabaseWrapper class and overridden the _cursor method as below:
from django.db.backends.postgresql.base import DatabaseWrapper

class DatabaseWrapper(DatabaseWrapper):
    def __init__(self, *args, **kwargs):
        super(DatabaseWrapper, self).__init__(*args, **kwargs)

    def _cursor(self, name=None):
        cursor = super(DatabaseWrapper, self)._cursor(name)
        cursor.execute('SET search_path = schema_name')
        return cursor
The above code works fine for the application code we have written, but when I try to access the detail screen of any object in the Django admin panel, I get the error trace below:
File "/Users/azharuddin.syed/Desktop/application/custom_db_engine/base.py", line 13, in \_cursor
cursor.execute('SET search_path = schema_name')
File "/Users/azharuddin.syed/Desktop/application/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/Users/azharuddin.syed/Desktop/application/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
return self.\_execute_with_wrappers(sql, params, many=False, executor=self.\_execute)
File "/Users/azharuddin.syed/Desktop/application/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in \_execute_with_wrappers
return executor(sql, params, many, context)
File "/Users/azharuddin.syed/Desktop/application/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in \_execute
return self.cursor.execute(sql, params)
File "/Users/azharuddin.syed/Desktop/application/venv/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/Users/azharuddin.syed/Desktop/application/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 82, in \_execute
return self.cursor.execute(sql)
django.db.utils.ProgrammingError: syntax error at or near "SET"
LINE 1: ...6171701248_sync_1" NO SCROLL CURSOR WITH HOLD FOR SET search...
If I understand correctly, the two queries are getting mixed together and executed as a single statement. Why is this the case?
P.S.: I am aware that we can pass the options parameter with the search path when defining the connections in Django, but this does not work in my case as the DB is behind a proxy. Any other solutions are welcome.
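The DECLARE ... NO SCROLL CURSOR WITH HOLD FOR SET search... fragment in the traceback suggests the admin requested a server-side (named) cursor; psycopg2 turns the first execute() on a named cursor into a DECLARE ... CURSOR FOR <sql> statement, so the SET ends up embedded inside that DECLARE. A hedged sketch of one possible workaround, keeping the hard-coded schema_name from the question, is to issue the SET through a plain, unnamed cursor instead:

from django.db.backends.postgresql.base import DatabaseWrapper as PostgresDatabaseWrapper

class DatabaseWrapper(PostgresDatabaseWrapper):
    def _cursor(self, name=None):
        cursor = super()._cursor(name)
        if name is None:
            # plain cursor: session-level statements are safe here
            cursor.execute('SET search_path = schema_name')
        else:
            # server-side cursor: its next execute() becomes DECLARE ... FOR <sql>,
            # so run the SET through a separate unnamed cursor on the same connection
            with self.connection.cursor() as plain_cursor:
                plain_cursor.execute('SET search_path = schema_name')
        return cursor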

Django Test "database table is locked"

In my Django project, I have a view that, when a user posts a zip file, responds immediately and then processes the data in the background with the help of threading. The view works fine when exercised normally, but when I run Django's tests it fails with a database table is locked error. Currently I'm using the default SQLite database, and I know that switching to another database may solve this, but I'm looking for an answer for the current setup. I trimmed the code for simplicity.
It seems that the problem is with writing to the DeviceReportModel table, but I'm not sure why TestDeviceReport is accessing it.
Model.py:
class DeviceReportModel(models.Model):
    device_id = models.PositiveIntegerField(primary_key=True)
    ip = models.GenericIPAddressField()
    created_time = models.DateTimeField(default=timezone.now)
    report_file = models.FileField(upload_to="DeviceReport")
    device_datas = models.ManyToManyField(DeviceDataReportModel)

    def __str__(self):
        return str(self.pk)  # device_id is the primary key, so there is no separate id field
Serializers.py:
class DeviceReportSerializer(serializers.ModelSerializer):
    class Meta:
        model = DeviceReportModel
        fields = '__all__'
        read_only_fields = ('created_time', 'ip', 'device_datas')
views.py:
from django.utils import timezone
from django.core.files.base import ContentFile
from rest_framework.response import Response
from rest_framework import status, generics
import time
import threading
from queue import Queue


class DeviceReportHandler:
    ReportQueue = Queue()

    @staticmethod
    def save_datas(device_object, request_ip, b64datas):
        device_data_models = []
        # ...
        # process device_data_models
        # this will take some time
        time.sleep(10)
        return device_data_models

    @classmethod
    def Check(cls):
        while True:
            if not cls.ReportQueue.empty():
                report = cls.ReportQueue.get()
                # ...
                report_model = DeviceReportModel(
                    device_id=report['device_object'], ip=report['request_ip'])
                # THIS LINE GIVES ERROR
                report_model.report_file.save(
                    "Report_{}.txt.gz".format(timezone.now()), ContentFile(report['report_data']))
                device_data_models = cls.save_datas(
                    report['device_object'], report['request_ip'], 'SomeData')
                report_model.device_datas.set(device_data_models)
                report_model.save()
                print("Report Handle Done")
            time.sleep(.1)

    @classmethod
    def run(cls):
        thr = threading.Thread(target=cls.Check)
        thr.daemon = True
        thr.start()


class DeviceReportView(generics.ListCreateAPIView):
    queryset = DeviceReportModel.objects.all()
    serializer_class = DeviceReportSerializer
    DeviceReportHandler.run()

    def post(self, request):
        # ...
        report = {
            'device_object': 1,
            'request_ip': '0.0.0.0',
            'report_data': b'Some report plain data',
        }
        # add request to ReportQueue
        DeviceReportHandler.ReportQueue.put(report)
        return Response("OK", status.HTTP_201_CREATED)
tests.py:
from rest_framework.test import APITestCase
import gzip
from io import BytesIO
import base64
import time


class TestDeviceReport(APITestCase):
    @classmethod
    def setUpTestData(cls):
        # add a new test device for other tests
        pass

    def generate_device_data(self):
        # generate fake device data
        return ""

    def test_Report(self):
        # generate device data
        device_data = ''
        for i in range(10):
            device_data += self.generate_device_data() + '\n'
        buf = BytesIO()
        compressed = gzip.GzipFile(fileobj=buf, mode="wb")
        compressed.write(device_data.encode())
        compressed.close()
        b64data = base64.b64encode(buf.getvalue()).decode()
        data = {
            "device_id": 1,
            "report_data": b64data
        }
        response = self.client.post(
            '/device/reports/', data=data, format='json')
        print(response.status_code, response.content)

    def tearDown(self):
        # put some sleep to check whether the data has been processed
        # see "Report Handle Done"
        time.sleep(10)
And here is the error log:
(myDjangoEnv) python manage.py test deviceApp.tests.tests.TestDeviceReport
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
201 b'"OK"'
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: database table is locked
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "<project_path>\deviceApp\views.py", line 303, in Check
"Report_{}.txt.gz".format(timezone.now()), ContentFile(report['report_data']))
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\fields\files.py", line 93, in save
self.instance.save()
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\base.py", line 741, in save
force_update=force_update, update_fields=update_fields)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\base.py", line 779, in save_base
force_update, using, update_fields,
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\base.py", line 870, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\base.py", line 908, in _do_insert
using=using, raw=raw)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\query.py", line 1186, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\models\sql\compiler.py", line 1335, in execute_sql
cursor.execute(sql, params)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\Masoud\Anaconda3\envs\myDjangoEnv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: database table is locked
.
----------------------------------------------------------------------
Ran 1 test in 10.023s
OK
Destroying test database for alias 'default'...
Database is locked errors
SQLite is meant to be a lightweight database, and thus can’t support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than SQLite can handle in its default configuration. This error means that one thread or process has an exclusive lock on the database connection and another thread timed out waiting for the lock to be released.
Python’s SQLite wrapper has a default timeout value that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error.
If you’re getting this error, you can solve it by:
Switching to another database backend. At a certain point SQLite becomes too “lite” for real-world applications, and these sorts of concurrency errors indicate you’ve reached that point.
Rewriting your code to reduce concurrency and ensure that database transactions are short-lived.
Increasing the default timeout value by setting the timeout database option:
'OPTIONS': {
    # ...
    'timeout': 20,
    # ...
}
This will make SQLite wait a bit longer before throwing “database is locked” errors; it won’t really do anything to solve them.
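For context, that option sits inside the relevant database entry in settings.py, roughly like the following (names and paths are illustrative):

# settings.py (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'db.sqlite3',
        'OPTIONS': {
            'timeout': 20,  # seconds to wait on a locked database before raising OperationalError
        },
    }
}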
https://docs.djangoproject.com/en/3.0/ref/databases/#database-is-locked-errors
Try using django.test.TransactionTestCase instead of TestCase
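Since the test above uses DRF's APITestCase, the closest switch is probably APITransactionTestCase, which keeps the API client helpers but uses TransactionTestCase semantics. A rough sketch:

from rest_framework.test import APITransactionTestCase

class TestDeviceReport(APITransactionTestCase):
    # TransactionTestCase does not hold the whole test inside one wrapping
    # transaction, so the background thread's writes are no longer blocked by it.
    # Note: setUpTestData() is a TestCase-only feature, so per-test fixtures
    # would need to move into setUp().
    def test_Report(self):
        ...  # same body as in tests.py above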

Why is table creation failing for an in-memory sqlite database?

I am trying to create an in-memory sqlite database using twisted.enterprise.adbapi.ConnectionPool.
Problem Description:
The following code works as expected:
#! /usr/bin/env python

from twisted.internet.task import react
from twisted.internet import defer
from twisted.enterprise.adbapi import ConnectionPool

sql_init = """
CREATE TABLE ajxp_changes ( seq INTEGER PRIMARY KEY AUTOINCREMENT, node_id NUMERIC, type TEXT, source TEXT, target TEXT, deleted_md5 TEXT );
CREATE TABLE ajxp_index ( node_id INTEGER PRIMARY KEY AUTOINCREMENT, node_path TEXT, bytesize NUMERIC, md5 TEXT, mtime NUMERIC, stat_result BLOB);
CREATE TRIGGER LOG_INSERT AFTER INSERT ON ajxp_index BEGIN INSERT INTO ajxp_changes (node_id,source,target,type) VALUES (new.node_id, "NULL", new.node_path, "create"); END;
"""

sql_insert = "INSERT INTO ajxp_index (node_path,bytesize,md5,mtime,stat_result) VALUES (?,?,?,?,?);"

sql_file_path = "/tmp/test.sqlite"


@react
@defer.inlineCallbacks
def main(reactor):
    cp = ConnectionPool("sqlite3", sql_file_path, check_same_thread=False)
    yield cp.runInteraction(lambda c, s: c.executescript(s), sql_init)
    params = (
        "/tmp/test.txt",
        "64",
        "5d41402abc4b2a76b9719d911017c592",
        2832345,
        "xxxxxx",
    )
    yield cp.runOperation(sql_insert, params)
However, replacing sql_file_path = "/tmp/test.sqlite" with sql_file_path = ":memory:" suddenly causes the script to fail with the following traceback:
$ python test.py
main function encountered error
Traceback (most recent call last):
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/internet/defer.py", line 500, in errback
self._startRunCallbacks(fail)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/internet/defer.py", line 567, in _startRunCallbacks
self._runCallbacks()
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/internet/defer.py", line 1357, in gotResult
_inlineCallbacks(r, g, deferred)
--- <exception caught here> ---
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/python/failure.py", line 393, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "test.py", line 35, in main
yield cp.runOperation(sql_insert, params)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/python/threadpool.py", line 250, in inContext
result = inContext.theWork()
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/python/threadpool.py", line 266, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/enterprise/adbapi.py", line 477, in _runInteraction
compat.reraise(excValue, excTraceback)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/python/compat.py", line 467, in reraise
raise exception.with_traceback(traceback)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/enterprise/adbapi.py", line 467, in _runInteraction
result = interaction(trans, *args, **kw)
File "/Users/lthibault/.pyenv/versions/3.5.3/lib/python3.5/site-packages/twisted/enterprise/adbapi.py", line 486, in _runOperation
trans.execute(*args, **kw)
sqlite3.OperationalError: no such table: ajxp_index
What I have tried
1. Replicate in standard library
I first sought to determine whether the problem related to sqlite, or to twisted. To do so, I ran the following script, which behaves as expected.
#! /usr/bin/env python

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(sql_init)
conn.execute(
    sql_insert,
    ("/tmp/test.txt", "64", "5d41402abc4b2a76b9719d911017c592", 2832345, "xxxxxx"),
)
Conclusion: The issue stems from twisted.enterprise.adbapi.ConnectionPool
2. Try using different ConnectionPool methods to run the INSERT statement.
Admittedly, I was grasping at straws at this point, but I figured the issue might stem from my use of runOperation. I decided to replicate the original example using runInteraction and runQuery.
The following replacements for yield cp.runOperation(sql_insert, params) also fail with an identical error.
yield cp.runInteraction(lambda c, s, p: c.execute(s), sql_insert, params)
yield cp.runQuery(sql_insert, params)
Importantly, when the sqlite database path is changed from :memory: to some path on persistent storage, both runInteraction and runQuery work as expected.
Conclusion: the problem has to do with using an in-memory sqlite database inside of Twisted.
Any ideas?
Okay, it turns out that under the hood ConnectionPool opens a new connection to :memory: for each query/worker thread, and every new :memory: connection starts out empty, so the database is effectively re-created every time.
The solution seems to be to write a DB-API 2.0 module that wraps sqlite3 and always hands back the same :memory: connection when its connect function is called.
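A rough sketch of that idea (the module name and details are assumptions, not tested code):

# memory_sqlite.py -- looks like a DB-API 2.0 module to ConnectionPool,
# but always hands back one shared in-memory connection, so the schema
# created by runInteraction survives across pool threads.
import sqlite3

apilevel = sqlite3.apilevel
paramstyle = sqlite3.paramstyle
Error = sqlite3.Error

_shared = None

def connect(*args, **kwargs):
    global _shared
    if _shared is None:
        _shared = sqlite3.connect(":memory:", check_same_thread=False)
    return _shared

It would then be used as cp = ConnectionPool("memory_sqlite", cp_min=1, cp_max=1); note this sketch does not guard against the pool closing the shared connection on shutdown.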

Django savepoint rollback on catching an Integrity Error causes a TransactionManagementError

I am running the following code inside a transaction.atomic block in Django.
@transaction.atomic()
def test():
    a.save()
    sid = transaction.savepoint()
    try:
        b.save()
        transaction.savepoint_commit(sid)
    except IntegrityError as e:
        transaction.savepoint_rollback(sid)
    c.save()
This code gives me the following error:
TransactionManagementError
An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
I followed the following link from the official documentation. https://docs.djangoproject.com/en/1.10/topics/db/transactions/#s-savepoint-rollback
What am I missing here?
EDIT: Adding the stacktrace.
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/models/manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/models/query.py", line 401, in create
obj.save(force_insert=True, using=self.db)
File "/Users/vibhor/Documents/juggernaut/user-persistence-service/books/models/books.py", line 243, in save
transaction.savepoint_rollback(sid)
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/transaction.py", line 66, in savepoint_rollback
get_connection(using).savepoint_rollback(sid)
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/backends/base/base.py", line 328, in savepoint_rollback
self._savepoint_rollback(sid)
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/backends/base/base.py", line 288, in _savepoint_rollback
cursor.execute(self.ops.savepoint_rollback_sql(sid))
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/backends/utils.py", line 59, in execute
self.db.validate_no_broken_transaction()
File "/Users/vibhor/Documents/juggernaut/user-venv-new/lib/python2.7/site-packages/django/db/backends/base/base.py", line 429, in validate_no_broken_transaction
"An error occurred in the current transaction. You can't "
TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
I think you're running into this issue described in the documentation:
Savepoints may be used to recover from a database error by performing a partial rollback. If you’re doing this inside an atomic() block, the entire block will still be rolled back, because it doesn’t know you’ve handled the situation at a lower level! To prevent this, you can control the rollback behavior with the following functions...
As noted there, you probably want to do a transaction.set_rollback(False) to prevent the whole atomic block from being rolled back.
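A hedged sketch of that approach, assuming a, b and c are model instances as in the question (the flag is cleared before the savepoint rollback, on the assumption that the failed save marked the connection as needing a full rollback):

from django.db import IntegrityError, transaction

@transaction.atomic
def test():
    a.save()
    sid = transaction.savepoint()
    try:
        b.save()
        transaction.savepoint_commit(sid)
    except IntegrityError:
        # the failed INSERT marked the connection as needing a rollback;
        # clear that flag, then roll back only to the savepoint
        transaction.set_rollback(False)
        transaction.savepoint_rollback(sid)
    c.save()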
Now, is there a reason you're doing this manually? The code you posted could accomplish the same thing with a nested atomic block, and as the documentation notes:
When the atomic() decorator is nested, it creates a savepoint to allow partial commit or rollback. You’re strongly encouraged to use atomic() rather than the functions described below.
As explained in the documentation:
In order to guarantee atomicity, atomic disables some APIs. Attempting to commit, roll back, or change the autocommit state of the database connection within an atomic block will raise an exception.
The proper way of achieving what you want is to create another atomic block:
@transaction.atomic()
def test():
    a.save()
    try:
        with transaction.atomic():
            b.save()
    except IntegrityError:
        # handle exception here...
        ...
    c.save()
@transaction.atomic()
def test():
    a.save()
    sid = transaction.savepoint()
    try:
        with transaction.atomic():
            b.save()
        transaction.savepoint_commit(sid)
    except IntegrityError as e:
        transaction.savepoint_rollback(sid)
    c.save()
Please check this.

SQLAlchemy + postgres : (InternalError) current transaction is aborted, commands ignored until end of transaction block

I am attempting to save a parent/children set of records, and I want to wrap the inserts in a transaction. I am using SQLAlchemy with postgresql 8.4.
Here is a snippet of my code:
def insert_data(parent, child_rows):
    # Start a transaction
    conn = _get_connection()
    tran = conn.begin()
    try:
        sql = get_sql_from_parent(parent)
        res = conn.execute(sql) # <- Code barfs at this line
        item = res.fetchone() if res else None
        parent_id = item['id'] if ((item) and ('id' in item)) else -1
        if parent_id == -1:
            raise Exception('Parent could not be saved in database')
        # Import children
        for child in child_rows:
            child_sql = get_child_sql(parent_id, child)
            conn.execute(child_sql)
        tran.commit()
    except IntegrityError:
        pass # rollback?
    except Exception as e:
        tran.rollback()
        print "Exception in user code:"
        print '-'*60
        traceback.print_exc(file=sys.stdout)
        print '-'*60
When I invoke the function, I get the following stacktrace:
Traceback (most recent call last):
File "import_data.py", line 125, in <module>
res = conn.execute(sql)
File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1405, in execute
params)
File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1582, in _execute_text
statement, parameters
File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1646, in _execute_context
context)
File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1639, in _execute_context
context)
File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 330, in do_execute
cursor.execute(statement, parameters)
InternalError: (InternalError) current transaction is aborted, commands ignored until end of transaction block
...
Does anyone know why I am getting this cryptic error - and how do I resolve it?
Can you activate query logging on PostgreSQL? (Set log_min_duration_statement to 0 in postgresql.conf, then reload the configuration.)
Then look at your PostgreSQL logs to debug it.
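Independently of logging, the except IntegrityError: pass branch above leaves the aborted transaction open, and every later statement on that connection then fails with exactly this message until a rollback is issued. A simplified, hedged sketch of the same function with an explicit rollback in that branch (the parent-id handling is elided, and the helper functions are the ones from the question):

import sys
import traceback
from sqlalchemy.exc import IntegrityError

def insert_data(parent, child_rows):
    conn = _get_connection()
    tran = conn.begin()
    try:
        conn.execute(get_sql_from_parent(parent))
        for child in child_rows:
            conn.execute(get_child_sql(parent, child))
        tran.commit()
    except IntegrityError:
        tran.rollback()  # recover the connection instead of leaving the aborted transaction open
    except Exception:
        tran.rollback()
        traceback.print_exc(file=sys.stdout)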
