I am missing the table name in Django's IntegrityError:
Traceback (most recent call last):
...
return self.cursor.execute(sql, params)
File ".../django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File ".../django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
IntegrityError: null value in column "manager_slug" violates not-null constraint
DETAIL: Failing row contains (17485, null, 2017-10-10 09:32:19, , 306).
Is there a way to see which table the INSERT/UPDATE is accessing?
We use PostgreSQL 9.6.
This is a generic question: how do I get a better error message?
This is not a question about this particular column; I found the relevant table and column quickly. But I want to improve the error message that our CI system reports, so that next time I can see the table name immediately.
I know that I could reveal the missing information easily with a debugger if I saw this error during development. But in my case it happens in production, and I only have the stacktrace above.
The exception message in this traceback is the original message from the database driver. It is useful to keep it, and the traceback, when the problem is googled or reported.
The exception class is the same django.db.utils.IntegrityError for all backends, but the message, or rather the arguments, depend on the backend:
postgres: null value in column "manager_slug" violates not-null constraint\n DETAILS...\n
mysql . . : (1048, "Column 'manager_slug' cannot be null")
sqlite3 . : NOT NULL constraint failed: appname_modelname.manager_slug
The table name is visible only with the sqlite3 backend. Some backends pass only a message string to the exception, but mysql passes two arguments: a numeric error code and a message. (I take this to be a general question, not only about PostgreSQL.) The authors of some backends expect that the application author will know the table name directly or from the SQL, but that is not true with a general ORM package. There is no preferred, generally accepted way to extend the message, even where it is technically easy to do.
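For PostgreSQL you can often dig the table name out of the original driver exception, which Django chains onto its own exception as __cause__; psycopg2 (2.5+) exposes the server's error fields on a .diag attribute, and PostgreSQL 9.3+ fills in the table and column names for constraint violations. A minimal sketch under those assumptions (MyModel is a placeholder):

from django.db import IntegrityError

try:
    MyModel.objects.create()  # hypothetical save that violates a constraint
except IntegrityError as exc:
    cause = exc.__cause__  # the original psycopg2 exception wrapped by Django
    diag = getattr(cause, 'diag', None)  # absent on non-psycopg2 backends
    if diag is not None:
        print(diag.table_name, diag.column_name)  # e.g. 'app_model', 'manager_slug'
    raise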
Development and debugging are easy:
Much additional information is available in DEBUG mode in development ("SQL" in the last frame or a class name of an object on a line like "myobj.save()")
python manage.py test --debug-sql: "Prints logged SQL queries on failure."
The same error in development/tests with sqlite3 is easier to read.
...but you are probably asking about a run-time error in production.
Since the question is so general, I can only guess which direction could be interesting for you.
A) The most important information in the traceback is usually a few lines above the many lines containing ".../django/db/...". That part is easy for an expert to read. It works unless the code is as dynamic and generic as the Django admin site, where no code near the myobj.save() call (nor in the parent frames) contains an explicit model name. Example:
# skip some initial universal code in ".../django/..."
...
# our apps start to be interesting... (maybe other installed app)
...
# START HERE: Open this line in the editor. If the function is universal, jump to the previous frame.
File ".../me/app/...py", line 47, in my...
my_obj.save()
# skip many stack frames .../django/db/... below
File ".../django/db/models/base.py", line 734, in save
# self.save_base(... # this line 733 is not visible
force_update=force_update, update_fields=update_fields)
...
# interesting only sql and params, but not visible in production
File ".../django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
IntegrityError (or DataError similarly)...
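If your CI captures only the exception message, make sure it logs the whole traceback, so the application frames above the django/db layers (the ones that name the model) survive into production logs. A standard-library sketch (my_obj is a placeholder):

import logging

logger = logging.getLogger(__name__)

try:
    my_obj.save()
except Exception:
    # logging.exception records the full traceback, including the
    # application frames above .../django/db/... that identify the model
    logger.exception("save failed")
    raise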
B) Catch the information in a common ancestor of your models:
import django.db.utils
from django.db import models

class CommonBase(models.Model):  # placeholder name for your models' common ancestor
    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        try:
            super(CommonBase, self).save(*args, **kwargs)
        except django.db.utils.IntegrityError as exc:
            new_message = 'table {}'.format(self._meta.db_table)
            exc.extra_info = new_message
            # this is less compatible, but it doesn't require additional reading support:
            # exc.args = exc.args + (new_message,)
            raise
This could complicate debugging with multiple inheritance.
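Reading the annotation back is straightforward, for example in the code that feeds your CI log (a sketch; obj and logger are placeholders):

import logging

import django.db.utils

logger = logging.getLogger(__name__)

try:
    obj.save()
except django.db.utils.IntegrityError as exc:
    # extra_info was attached by the common ancestor's save() above
    logger.error("%s (%s)", exc, getattr(exc, 'extra_info', 'unknown table'))
    raise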
C) An implementation in Django's db module would be better, but I cannot imagine that it would be accepted and not reverted after some issue.
If you can create a SQL function, you can try the following.
First create a function, get_sequence_last_value, to get a sequence's last value (original post):
CREATE FUNCTION public.get_sequence_last_value(name) RETURNS int4 AS '
DECLARE
ls_sequence ALIAS FOR $1;
lr_record RECORD;
li_return INT4;
BEGIN
FOR lr_record IN EXECUTE ''SELECT last_value FROM '' || ls_sequence LOOP
li_return := lr_record.last_value;
END LOOP;
RETURN li_return;
END;' LANGUAGE 'plpgsql' VOLATILE;
Then find the tables whose sequence value is greater than the id in the error message and which have a manager_slug column:
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_name in (
SELECT table_name
FROM (
SELECT table_name,
get_sequence_last_value(
substr(column_default, 10, strpos(column_default, '::regclass') - 11)
) as lv
FROM information_schema.columns
WHERE column_default LIKE 'nextval%'
) as t_seq_lv
WHERE lv > 17485
)
AND column_name = 'manager_slug';
I understand that this solution is not complete, but I hope it can help you anyway.
I would suggest using Sentry (https://sentry.io/welcome/). In Sentry's Issues view you can observe all local variables for every frame of a stack trace.
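Wiring Sentry into a Django project is only a few lines; a sketch, with a placeholder DSN:

# settings.py
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://<key>@<host>/<project>",  # placeholder; use your project's DSN
    integrations=[DjangoIntegration()],
)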
The best solution that I have found for your problem is to override the DatabaseErrorWrapper behaviour. To do that, go to django/db/utils.py and on line 86 replace dj_exc_value = dj_exc_type(*exc_value.args) with:

if getattr(exc_value, 'diag', None) and exc_value.diag.table_name:
    extra = "\nIn the table '%s'" % exc_value.diag.table_name
    dj_exc_value = dj_exc_type(exc_value.args[0] + extra)
else:
    dj_exc_value = dj_exc_type(*exc_value.args)
and when an IntegrityError appears, the message includes the table name:
django.db.utils.IntegrityError: null value in column "signature" violates not-null constraint
DETAIL: Failing row contains (89, null).
In the table 'teachers_signature'
I am not sure but this should work with these exceptions:
DataError
OperationalError
IntegrityError
InternalError
ProgrammingError
NotSupportedError
It works for me; tell me if it works for you. Remember to edit the file in the Django installation that your project actually uses.
You did not set a value for the column "manager_slug", and NULL is not allowed in this column. You should either set a value or remove the NOT NULL constraint.
IntegrityError: null value in column "manager_slug" violates not-null constraint
Is there a way to see which table the INSERT/UPDATE is accessing?
If you're running migrate, you could just check the stdout to see which migration is being executed. Then you can open the migration file and get a better overview of the issue.
If you aren't, I suggest you take a look at PEP 249's optional error handling extensions, since the Django database wrapper is based on the PEP 249 specification.
Reference: the Django code in DatabaseErrorWrapper.
2nd EDIT:
You could catch the IntegrityError and access the .messages attribute from the database wrapper.
A pseudo example:
try:
    ...  # operation on the database
except IntegrityError as ie:
    print(ie.wrapper.cursor.messages[:])
I tend to use as few dependencies as possible and leave third-party libs as they are.
Most of the time, I log every SQL insert/update/delete query. That way I can easily identify which query went wrong, without extra effort, and it lets me track who did what and when in my system.
That happens to meet part of a regulatory requirement on action tracking in my industry.
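In Django, that kind of query log can be captured from the django.db.backends logger; note that Django only emits these records when settings.DEBUG is True, so a production setup may need its own wrapper around cursor.execute. A sketch of the settings:

# settings.py -- log every SQL statement Django executes
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'sql_file': {
            'class': 'logging.FileHandler',
            'filename': 'sql.log',  # placeholder path
        },
    },
    'loggers': {
        # Django only emits these records when DEBUG is True
        'django.db.backends': {
            'handlers': ['sql_file'],
            'level': 'DEBUG',
        },
    },
}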
Related
I have a Flask application that uses SQLAlchemy (with some Marshmallow for serialization and deserialization).
I'm currently encountering some intermittent issues when trying to dump an object post-commit.
To give an example, let's say I have implemented a (multi-tenant) system for tracking system faults of some sort. This information is contained in a fault table:
class Fault(Base):
__tablename__ = "fault"
fault_id = Column(BIGINT, primary_key=True)
workspace_id = Column(Integer, ForeignKey('workspace.workspace_id'))
local_fault_id = Column(Integer)
name = Column(String)
description = Column(String)
I've removed a number of columns in the interest of simplicity, but this is the core of the model. The columns should be largely self explanatory, with workspace_id effectively representing tenant, and local_fault_id representing a tenant-specific fault sequence number, which is handled via a separate fault_sequence table.
This fault_sequence table holds a counter against workspace, and is updated by means of a simple on_fault_created() function that is executed by a trigger:
CREATE TRIGGER fault_created
AFTER INSERT
ON "fault"
FOR EACH ROW
EXECUTE PROCEDURE on_fault_created();
So - the problem:
I have a Flask endpoint for fault creation, where we create an instance of a Fault entity, add this via a scoped session (session.add(fault)), then call session.commit().
It seems that this is always successful in creating the desired entities in the database, executing the sequence update trigger etc. However, when I then try to interrogate the fault object for updated fields (after commit()), around 10% of the time I find that each key/field just points to an Exception:
psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block
Which seems to boil down to the following:
(psycopg2.errors.InvalidTextRepresentation) invalid input syntax for integer: ""
[SQL: SELECT fault.fault_id AS fault_fault_id, fault.workspace_id AS fault_workspace_id, fault.local_fault_id AS fault_local_fault_id, fault.name as fault_name, fault.description as fault_description
FROM fault
WHERE fault.fault_id = %(param_1)s]
[parameters: {'param_1': 166}]
(Background on this error at: http://sqlalche.me/e/13/2j8)
My question, then, is what do we think could be causing this?
I think it smells like a race condition, with the update trigger not being complete before SQLAlchemy has tried to get the updated data; perhaps local_fault_id is null, and this is resulting in the invalid input syntax error.
That said, I have very low confidence on this. Any guidance here would be amazing, as I could really do with retrieving that sequence number that's incremented/handled by the update trigger.
Thanks
Edit 1:
Some more info:
I have tried removing the update trigger, in the hope of eliminating that as a suspect. This behaviour is still intermittently evident, so I don't think it's related to that.
I have tried using flush and refresh before the commit, and this allows me to get the values that I need, though commit still appears to 'break' the fault object.
Edit 2:
So it really seems to be more a Postgres issue than anything else. When I interrogate my database logs, this is the weirdest thing: I can copy and paste the command it says is failing, and I struggle to see how the integer value in the WHERE clause could possibly evaluate to an empty string.
The same error is reproducible with SELECT ... FROM fault WHERE fault.fault_id = '', which in no way seems to be the query making it to the DB.
I am stumped.
Your sentence "The same error is reproducible with SELECT ... FROM fault WHERE fault.fault_id = '', which in no way seems to be the query making it to the DB." seems to indicate that you are trying to access an object that does not have the database primary key "fault_id".
I guess, given that you did not provide the code, that you are adding the object to your session (session.add), committing (session.commit) and then using the object. As fault_id is autogenerated by the database, the fault object in the session (in memory) does not have fault_id.
I believe you can correct this with:
session.add(fault)
session.commit()
session.refresh(fault)
The refresh needs to be AFTER the commit in order to refresh the fault object and retrieve fault_id.
If you are using async, you need:
session.add(fault)
await session.commit()
await session.refresh(fault)
I'm trying to get a list of warnings after a MySQL query, using the Django admin. I can see from the documentation here that it's possible to record warnings by setting connection.get_warnings to True. But I can't find anything explaining how to read those warnings.
I do not want to throw an exception. I am deleting items from the database using a DELETE IGNORE statement and want to get all instances of deletions that failed (due to foreign keys, etc.).
I've tried returning the result of the execute function itself (that just gave me a number) and calling fetchwarnings() on the cursor (which threw a "Cursor object has no attribute fetchwarnings" error).
I'm still new to both Python and Django. I'm looking through all the documentation I can find but can't find anything that works.
from django.db import connection

query = "{query here}"
connection.get_warnings = True
with connection.cursor() as cursor:
    cursor.execute(query)            # <-- Returns a number
    return cursor.fetchwarnings()    # <-- Throws an error
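One workaround, since Django's cursor wrapper may not expose the connector's fetchwarnings(), is to ask MySQL itself with SHOW WARNINGS on the same connection right after the statement (a sketch; the function name is mine):

from django.db import connection

def execute_with_warnings(query):
    """Run a statement and return MySQL's warnings for it."""
    with connection.cursor() as cursor:
        cursor.execute(query)
        # SHOW WARNINGS reports the warnings raised by the previous statement
        cursor.execute("SHOW WARNINGS")
        return cursor.fetchall()  # rows of (level, code, message)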
Using Python and SQLite3, where c is a cursor, this code...
print("vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv")
print("SQL and parameters:",sql,parm)
c.execute(sql,parm)
# Get the row
print("Executed OK")
response = c.fetchone()
# If not successful return null
if not response:
    return None
#
print("and produced ", response)
print("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^")
give this output:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
SQL and parameters: select * from Links where LinkNum = ? (301,)
Executed OK
and produced (301, 'Index', 'The Independent', 'https://www.independent.co.uk/', 6, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
SQL and parameters: select * from Links where LinkNum = ? (301,)
Encountered exception of type ProgrammingError with arguments ('Incorrect number of bindings supplied. The current statement uses 1, and there are 6 supplied.',)
The application will close
Two identical statements: one works and the next throws the exception. As can be seen, the row I'm trying to retrieve has 6 columns, but that's the only hint/clue I can see. Can anyone help with tracking down the problem? Thanks.
Whatever was causing SQLite to have a fit, I remedied the problem by retrieving the second row/object outside the Links object and passing it as an argument to the method I was calling, rather than having that method attempt to retrieve the object.
The underlying problem is probably still there, deep within the internals of Python instantiation and/or SQLite. Whatever it is, it is solved with some less fancy code.
What's the proper way to recover from an IntegrityError, or any other errors that could leave my transactions screwed up without using manual transaction control?
In my application, I'm running into problems with IntegrityErrors that I want to recover from, that screw up later database activity, leaving me with:
DatabaseError: current transaction is aborted, commands ignored until end of transaction block
for all database activity after ignoring IntegrityErrors.
This block of code should reproduce the error I'm seeing
from django.db import IntegrityError, transaction

try:
    MyModel.save()  # Do a bad save that will raise IntegrityError
except IntegrityError:
    pass
MyModel.objects.all()  # raises DatabaseError: current transaction is aborted, commands ignored until end of transaction block
According to the docs, the way to recover from an IntegrityError is to roll back the transaction. But the following code results in a TransactionManagementError.
from django.db import IntegrityError, transaction

try:
    MyModel.save()
except IntegrityError:
    transaction.rollback()  # raises TransactionManagementError: This code isn't under transaction management
MyModel.objects.all()  # Should work
EDIT: I'm confused by the message from the TransactionManagementError, because if in my except I do a:
connection._cursor().connection.rollback()
instead of the django transaction.rollback(), the MyModel.objects.all() succeeds, which doesn't make sense if my code "isn't under transaction management". It also doesn't make sense that code that isn't under transaction management (which I assume means it's using autocommit) can have transactions that span multiple queries.
EDIT #2: I'm aware of using manual transaction control to be able to recover from these errors, but shouldn't I be able to recover without manual transaction control? My understanding is that if I'm using autocommit, there should only be one write per transaction, so it should not affect later database activity.
EDIT #3: This is a couple years later, but in django 1.4 (not sure about later versions), another issue here was that Model.objects.bulk_create() doesn't honor autocommit behavior.
Versions:
Django: 1.4 (TransactionMiddleWare is not enabled)
Python: 2.7
Postgres: 9.1
Django's default commit mode is AutoCommit. In order to do rollback, you need to wrap the code doing the work in a transaction. [docs]
with transaction.commit_on_success():
    ...  # Your code here. Errors will auto-rollback.
To get database-level autocommit, you will need the following option in your DATABASES settings dictionary:
'OPTIONS': {'autocommit': True,}
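In full, with the Django 1.4 PostgreSQL backend, that looks like the following sketch (NAME/USER are placeholders):

# settings.py (Django 1.4)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'OPTIONS': {'autocommit': True},  # database-level autocommit
    }
}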
Alternatively, you can use explicit savepoints to roll back to. [docs]
@transaction.commit_manually
def viewfunc(request):
    a.save()
    # open transaction now contains a.save()
    sid = transaction.savepoint()
    b.save()
    # open transaction now contains a.save() and b.save()
    if want_to_keep_b:
        transaction.savepoint_commit(sid)
        # open transaction still contains a.save() and b.save()
    else:
        transaction.savepoint_rollback(sid)
        # open transaction now contains only a.save()
    transaction.commit()
I have a database table with a UNIQUE key. When I insert a record there are two possibilities: either the unique item doesn't exist yet, which is fine, and I just get the new id back; or the item already exists and I need to get the id of that existing record.
The problem is that whatever I try, I always get some exception.
Here's example of the code:
def __init__(self, host, user, password, database):
    # set basic attributes
    super().__init__(host, user, password, database)
    # open connection
    try:
        self.__cnx = mysql.connector.connect(
            database=database, user=user, password=password, host=host)
        # self.__cursor = self.__cnx.cursor()
    except ...

def insert_domain(self, domain):
    insertq = "INSERT INTO `sp_domains` (`domain`) VALUES ('{0}')".format(domain)
    cursor = self.__cnx.cursor()
    try:
        cursor.execute(insertq)
        print("unique")
    except (mysql.connector.errors.IntegrityError) as err:
        self.__cnx.commit()
        print("duplicate")
        s = "SELECT `domain_id` FROM `sp_domains` WHERE `domain` = '{0}';".format(domain)
        try:
            id = cursor.execute(s).fetchone()[0]
        except AttributeError as err:
            print("Unable to execute the query:", err, file=sys.stderr)
        except mysql.connector.errors.ProgrammingError as err:
            print("Query syntax error:", err, file=sys.stderr)
        else:
            self.__cnx.commit()
    cursor.close()
But whatever I try, on the first duplicate record I get either 'MySQL Connection not available' or 'Unread result' errors. The code is just an example to demonstrate the problem.
This is my first program using Connector/Python, so I don't know all the rules about fetching results, committing queries and so on.
Could anyone help me with this issue, please? Or is there a more efficient way to handle this task? (This one doesn't seem like the best solution to me.) Thank you for any advice.
I can't fix your code, because you've given us two different versions of the code and two partially-described errors without full information, but I can tell you how to get started.
From a comment:
In previous version it was type error I guess, something like "NoneType has no attribute 'fetchone'.
Looking at your code, the only place you call fetchone is here:
id = cursor.execute(s).fetchone()[0]
So obviously, cursor.execute(s) returned None. Why did it return None? Well, that's what it's supposed to return, according to the documentation.*
What you want to do is:
cursor.execute(s)
id = cursor.fetchone()[0]
… as all of the sample code does.
And for future reference, it's a lot easier to debug an error like this if you first note which line it happens on instead of throwing away the traceback, and then break that line into pieces and log the intermediate values. Usually you'll find one that isn't what you expected, and the problem will be much more obvious at that point than three steps later, when you get a bizarre exception.
* Technically, the documentation just says that "Return values are not defined" for cursor.execute, so it would be perfectly legal for a DB-API module to return self here. Then again, it would also be legal to return some object that erases your hard drive when you call a method on it.