SQLAlchemy IntegrityError - python

I'm having a problem using SQLAlchemy with PySide (PyQt). I'm trying to pop up a QtGui.QDialog, but when I do, SQLAlchemy throws an exception:
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\preo\preodb\dbviewandmodel.py", line 32, in rowCount
    return len(self.rows())
  File "C:\Python27\lib\site-packages\preo\preodb\dbviewandmodel.py", line 30, in rows
    return self.tableobj.query.all()
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\query.py", line 1579, in all
    return list(self)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\query.py", line 1688, in __iter__
    self.session._autoflush()
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\session.py", line 862, in _autoflush
    self.flush()
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\session.py", line 1388, in flush
    self._flush(objects)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\session.py", line 1469, in _flush
    flush_context.execute()
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\unitofwork.py", line 302, in execute
    rec.execute(self)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\unitofwork.py", line 446, in execute
    uow
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\mapper.py", line 1878, in _save_obj
    execute(statement, params)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1191, in execute
    params)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1271, in _execute_clauseelement
    return self.__execute_context(context)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1302, in __execute_context
    context.parameters[0], context=context)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1401, in _cursor_execute
    context)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1394, in _cursor_execute
    context)
  File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\default.py", line 299, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (IntegrityError) ('23000', "[23000] [Microsoft][ODBC SQL Server Driver][SQL Server]Violation of UNIQUE KEY constraint 'UQ__users__F3DBC5720DAF0CB0'. Cannot insert duplicate key in object 'dbo.users'. (2627) (SQLExecDirectW); [01000] [Microsoft][ODBC SQL Server Driver][SQL Server]The statement has been terminated. (3621)") u'INSERT INTO users (username, fullname, email, passwordmd5) OUTPUT inserted.id VALUES (?, ?, ?, ?)' (None, None, None, None)
This is particularly troubling because I have no code, anywhere, that even attempts to insert records into SQL; I am only ever attempting to query data from the database. In fact, my DB model is read-only with respect to what PySide/PyQt are doing (i.e., I'm using a QtGui.QTableView model/view and there is no insertRows function in that model).
I have no idea what's going on or how to solve it - again, I have no code to modify SQL records at all, but SQLAlchemy is still attempting to insert blank records into one of my SQL tables. All I can see, in the background, is that the QTableView data model is querying the database A LOT. It just seems that when I pop up this QDialog (which does have some code in it to query some table columns) this error is thrown. Oddly, this isn't consistent: sometimes the popup appears before the exception, sometimes after. Under normal circumstances the QTableView data model works great, just not when I pop up this dialog (and ironically, the popup isn't using a QTableView at all, just standard widgets like QLineEdit, QTextEdit, etc.)
If it helps, I'm using Python 2.7 with SQLAlchemy 0.6.6 (also with Elixir 0.7.1), and PySide 1.0.0 (and PyQt4 4.8.3). I'm on Windows 7 using SQL Server 2008 R2 (Express). And yes, I've tried rebooting the PC, but the problem still occurs after a reboot. I'm reluctant to post more code because I have a lot of it in this particular project and I can't nail the problem down to anything specific.
I'm hoping someone might know of oddities in SQLAlchemy and/or PyQt that might be related to this. I'm also hoping I can continue using SQLAlchemy as I have a large data model built; I'm reluctant, at this point, to abandon this and use PyQt's SQL features.

I've managed to make this problem go away, but it's still not really clear to me why SQLAlchemy was trying to insert rows in my database - that really bothers me, but it's not happening anymore.
At any rate, what I think was happening was related to my SQLAlchemy data model and the way I was accessing it; here is a snippet of that model:
from elixir import *

metadata.bind = 'mssql+pyodbc://username:password/dbname'
metadata.bind.echo = False

class Users(Entity):
    using_options(tablename='users')

    username = Field(String(50), unique=True)
    fullname = Field(String(255))
    email = Field(String(255))
    passwordmd5 = Field(String(32))

    def __repr__(self):
        return "<Users ({})({})({})>".format(self.username, self.fullname, self.email)

    def prettyname(self):
        return {'username': 'User Name', 'fullname': 'Full Name', 'email': 'Email Address', 'passwordmd5': '$hidden$'}
In my code I needed a way of getting 'pretty' label names for a GUI without having to hard-code them in the GUI (I've been trying to create a dynamic way of building GUI forms). So I added the prettyname method to my data model to give me some application-specific metadata in that data model. All I'm doing is returning a dictionary of items.
I had a secondary problem in that sometimes I needed to get this data starting from the Users class itself and sometimes from a query result for Users (for example, Users.get_by(id=1)). As it turned out, retrieving this data had to be done in two ways. Starting from the class, I had to get the value this way:
prettyname = Users().prettyname()['username']
But when I was using query results it was:
prettyname = queryresult.prettyname()['username']
SQLAlchemy seemed to have a real problem with the former method (constructing a throwaway instance): it was in use every time I saw the crash, while the query-result method never crashed. In hindsight this explains the mystery INSERTs: Elixir entities are added to the session as soon as they are constructed (that's the default behaviour), so every bare Users() call queued up a blank row that the next query's autoflush then tried to INSERT. Still, I needed access to that metadata without having a query result in hand.
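A minimal sketch of that failure mode (my reconstruction, assuming the model above; which constraint fires depends on what is already in the table):

# Constructing an Elixir entity adds it to the session immediately,
# even when we only want to call a helper method on it:
label = Users().prettyname()['username']  # quietly queues a blank Users row

# The next query autoflushes the session, which tries to execute
# INSERT INTO users (username, fullname, email, passwordmd5)
# VALUES (NULL, NULL, NULL, NULL) and trips the UNIQUE constraint.
Users.query.all()  # raises sqlalchemy.exc.IntegrityError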
The fix, or should I say what turned out to fix this, came from another Stack Overflow answer (thank you, everyone at Stack Overflow; I'd be nothing without you). I changed the structure of the DB model:
class Users(Entity):
    using_options(tablename='users')

    username = Field(String(50), unique=True, info={'prettyname': 'User Name'})
    fullname = Field(String(255), info={'prettyname': 'Full Name'})
    email = Field(String(255), info={'prettyname': 'Email Address'})
    passwordmd5 = Field(String(32), info={'hidden': True})

    def __repr__(self):
        return "<Users ({})({})({})>".format(self.username, self.fullname, self.email)
This allows me to use a common method of introspection to get the dictionary data in the info argument, regardless of whether I'm looking at the class or a query result. In either case I use the .table attribute of the class or query result, get the column I need from its .c collection, and then read that column's .info dictionary.
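For example (a minimal sketch; the same lookup works from the class or from a query result, since both expose the mapped table):

# Read the application metadata straight off the column definitions;
# no Users instance is ever constructed, so nothing joins the session.
label = Users.table.c.username.info['prettyname']             # 'User Name'
hidden = Users.table.c.passwordmd5.info.get('hidden', False)  # True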
Whatever the case, SQLAlchemy now no longer tries to arbitrarily insert rows into the database.

Related

SQLServer: query runs on console but not on SQLAlchemy

I'm trying to run a very expensive SQL Server query via Python + SQLAlchemy. It runs just fine in the SQL Server console, but it errors out when called via SQLAlchemy.
Test run looks like this:
Run query on SQL Server console.
Wait about 15 minutes for it to finish.
Query runs just fine and returns ~50,000 rows.
When running the same query using Python + SQLAlchemy, it looks like this:
Run query.
Wait a long time.
Code errors out and throws a misleading error stating that the query did not return any rows.
I am positive that this error message cannot possibly be right, because I have tested the same query on console and it runs just fine and returns A LOT of rows. Does anyone know what is really happening here?
Query looks like this:
USE DB_NAME;
DROP TABLE IF EXISTS #TB_1;
CREATE TABLE #TB_1 (FIELD_1 BIGINT, FIELD_2 BIGINT);
INSERT INTO #TB_1 VALUES (1, 5), (2, 6), (3, 7);
--------------------------------------------------
DROP TABLE IF EXISTS #TB_2;
SELECT * INTO #TB_2 FROM (
    SELECT DISTINCT FIELD_1, FIELD_2
    FROM dbo.PRIMARY_TABLE_1 (NOLOCK)
    WHERE FIELD_1 IN (SELECT * FROM #TB_1)
) AS TB_2;
--------------------------------------------------
SELECT FIELD_1, FIELD_2 FROM #TB_1
UNION ALL
SELECT FIELD_1, FIELD_2 FROM #TB_2
Code looks like this:
from sqlalchemy.engine import create_engine

engine = create_engine(SQLServer_URI)
with engine.connect() as connection:
    connection.execute(huge_query).fetchall()
Here's the stack trace:
Traceback (most recent call last):
  ...
  File "path-to-project/src/etl/ETL.py", line 48, in extract
    raw_data = connection.execute(query).fetchall()
  File "path-to-project\venv\lib\site-packages\sqlalchemy\engine\result.py", line 984, in fetchall
    return self._allrows()
  File "path-to-project\venv\lib\site-packages\sqlalchemy\engine\result.py", line 398, in _allrows
    make_row = self._row_getter
  File "path-to-project\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 1160, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "path-to-project\venv\lib\site-packages\sqlalchemy\engine\result.py", line 319, in _row_getter
    keymap = metadata._keymap
  File "path-to-project\venv\lib\site-packages\sqlalchemy\engine\cursor.py", line 1197, in _keymap
    self._we_dont_return_rows()
  File "path-to-project\venv\lib\site-packages\sqlalchemy\engine\cursor.py", line 1178, in _we_dont_return_rows
    util.raise_(
  File "path-to-project\venv\lib\site-packages\sqlalchemy\util\compat.py", line 211, in raise_
    raise exception
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
Yep, adding "SET NOCOUNT ON;" to the front of my query, as per Gord Thompson's comment, solved it for me. I got the same error running a stored procedure containing multiple queries. So this will do the trick (a sketch; pd.read_sql also needs the engine or connection as its second argument):

data = pd.read_sql("SET NOCOUNT ON; <your SQL query>", engine)

This answer https://stackoverflow.com/a/55597613/11692538 helped me solve and understand it. Basically, SQLAlchemy (or pyodbc) reads any messages sent during the execution of a query as the result, which is why you get the error "ResourceClosedError: This result object does not return rows. It has been closed automatically." Setting NOCOUNT ON prevents SQL Server from sending back count messages during the execution of the query (or queries).
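Applied to the original snippet, the fix is just to prepend the directive (a sketch; SQLServer_URI and huge_query are assumed to be defined as in the question):

from sqlalchemy import create_engine, text

engine = create_engine(SQLServer_URI)
with engine.connect() as connection:
    # SET NOCOUNT ON suppresses the "N rows affected" messages that
    # otherwise arrive ahead of the real result set and make the driver
    # hand back a cursor that appears to return no rows.
    rows = connection.execute(text("SET NOCOUNT ON; " + huge_query)).fetchall()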

Error: trying to redefine a primary key as non-primary key

I'm using the dataset library to attempt to back up a Postgres database into an SQLite file. The code I'm running goes as follows:
local_db = "sqlite:///backup_file.db"
with dataset.connect(local_db) as save_to:
    with dataset.connect(postgres_db) as download_from:
        for row in download_from['outlook']:
            save_to['outlook'].insert(row)
If I print one row of the table, it looks like this:
OrderedDict([
    ('id', 4400),
    ('first_sighting', '2014-08-31'),
    ('route', None),
    ('sighted_by', None),
    ('date', None)
])
However, when I get to the line save_to['outlook'].insert(row) I get an error with the following stack trace:
Traceback (most recent call last):
  File "/home/anton/Development/Python/TTC/backup_db.py", line 25, in <module>
    save_to['outlook'].insert(dict(row))
  File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/dataset/table.py", line 79, in insert
    row = self._sync_columns(row, ensure, types=types)
  File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/dataset/table.py", line 278, in _sync_columns
    self._sync_table(sync_columns)
  File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/dataset/table.py", line 245, in _sync_table
    self._table.append_column(column)
  File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 681, in append_column
    column._set_parent_with_dispatch(self)
  File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/sqlalchemy/sql/base.py", line 431, in _set_parent_with_dispatch
    self._set_parent(parent)
  File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 1344, in _set_parent
    self.key, table.fullname))
sqlalchemy.exc.ArgumentError: Trying to redefine primary-key column 'id' as a non-primary-key column on table 'outlook'
Any ideas as to what I'm doing wrong? I've tried this in Python 2.7.14 and 3.6.3.
Assuming you have a schema and table made for "outlook": did you create a primary-key field yourself, or did you let SQLite decide which field to make the primary key?
It is highly likely that you are trying to insert id twice: once generated by SQLite itself, and once coming from the source table's records.
I figured it out! The trick is that by default the dataset library makes tables with an auto-incrementing integer primary key. But my data already has an 'id' column. In order to avoid this problem, I should define my table before I try to add rows to it, and define it with no primary key, as follows:
with dataset.connect(local_db) as save_to:
    with dataset.connect(postgres_db) as download_from:
        table_to_save_to = save_to.create_table('outlook', primary_id=False)
        for row in download_from['outlook']:
            table_to_save_to.insert(row)
By doing .create_table(table_name, primary_id=False) I can make sure that I can insert my own id values into the table.
I found this solution by reading the docs.

define_table does not create table in database

I am running define_table in the recommended way:
db = DAL('postgres://user:XXXX@localhost:5432/mydb', migrate_enabled=False, auto_import=False, lazy_tables=True)
db.define_table('auth_user',
    Field('email', unique=True),
    Field('password', length=512, type='password', readable=False, label='Password'),
    ...)
This gets executed without errors, but no table is created in the database. Whenever I try to insert a new user:
relation "auth_user" does not exist
What can be going on? Once the tables are created (manually, for example), the application works fine. I am using a Postgres backend. This happens no matter what value I give to lazy_tables.
EDIT
This is the full test script:
from gluon import DAL
from gluon import Field

db = DAL('postgres://user:pass@localhost:5432/mydb', migrate_enabled=False)
db.define_table(
    'auth_user',
    Field('email', type='string', unique=True),
    Field('password', type='password'),
    Field('registration_key', type='string', length=512, writable=False, readable=False, default=''),
    Field('reset_password_key', type='string', length=512, writable=False, readable=False, default=''),
    Field('registration_id', type='string', length=512, writable=False, readable=False, default=''),
)
db.commit()

print db.tables
db.auth_user.insert(email='g@b.c')
And I get the following output:
['auth_user']
Traceback (most recent call last):
  File "xxx.py", line 19, in <module>
    db.auth_user.insert(email='g@b.c')
  File "/tmp/web2py/gluon/dal.py", line 9293, in insert
    ret = self._db._adapter.insert(self, self._listify(fields))
  File "/tmp/web2py/gluon/dal.py", line 1361, in insert
    raise e
psycopg2.ProgrammingError: relation "auth_user" does not exist
LINE 1: INSERT INTO auth_user(reset_password_key,registration_id,reg...
The table is somehow "created" (in memory?), but it is not really in the postgres database. What does this mean?
Simply remove migrate_enabled=False, which turns off migrations and therefore prevents the creation or modification of database tables. There is also no need to explicitly set auto_import=False as that is already the default.
If the above doesn't help, it is possible that web2py did successfully create such a table previously and it was removed without web2py knowing about it. If the application's /databases folder includes a file with a name like *_auth_user.table, delete that file and try again.
If that's not the issue, check the /databases/sql.log file and confirm that web2py attempted to create the table. Most likely, something in your system configuration is preventing the table from being created.
UPDATE: From your edit, it appears you are using the DAL outside of a web2py application. Because you have not specified the folder argument to the DAL() constructor, it will save the *.table migration files in the current working directory, and it will not create a sql.log file. In this case, it is best to create a separate folder for the migration and log files:
DAL('postgres://user:pass@localhost:5432/mydb', folder='/path/to/folder')
In that case, it will save all of the *.table migration files and the sql.log file in the specified folder.
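Putting both pieces of advice together, a corrected standalone script would start something like this (a sketch; the URI and folder path are placeholders):

from gluon import DAL, Field

# Leave migrations enabled (the default) so define_table actually creates
# the table, and give the DAL a dedicated folder for its *.table migration
# files and sql.log when running outside a web2py application.
db = DAL('postgres://user:pass@localhost:5432/mydb', folder='/path/to/folder')
db.define_table('auth_user', Field('email', type='string', unique=True))
db.commit()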

SQLAlchemy: successful insertion but then raises an exception

I am running SQLAlchemy against FirebirdSQL, and when I execute an insert command in my project, SQLAlchemy is raising an exception on returning from executing against the connection. However, the insert query is being constructed and executed successfully. Querying the database shows that the items are actually being inserted correctly.
Edit: I'm digging down into the fbcore.py module now, and checking the values of value and vartype indicates that the problem is probably in how the SEQUENCE used to generate the primary-key ID returns its data. The vartype is SQL_LONG, but the actual value is [<an integer>], where <an integer> is the value returned by a sequence generator I created to auto-increment the primary key (e.g. [14]). This suggests the problem should be resolved by fixing that, though I'm not sure how to do it. The generator appears to be working correctly within the database itself, but causes problems when its value is returned to SQLAlchemy.
See below for my existing implementation and the stack trace for details.
My code:
class Project:
    # (I've snipped project instantiation, where engine connection, table, etc. are configured)

    def save_project(self, id_=None, title=None, file_name=None, file_location=None):
        # Build the dictionary of values to store
        values = {}
        if title is not None:
            values['title'] = title
        if file_name is not None:
            values['file_name'] = file_name
        if file_location is not None:
            values['file_location'] = file_location
        # Simplification: I account for the case that there *is* data - skipping that here
        # Execute the correct kind of statement: insert or update.
        if id_ is None:
            statement = self.table.insert()
        else:
            statement = self.table.update().where(self.table.c.id == id_)
        result = self.connection.execute(statement, values)
        # If we inserted a row, get the new primary key. Otherwise, return
        # the one specified by the user; it does not change on update.
        project_id = result.inserted_primary_key if result.is_insert else id_
The traceback:
  File "/Users/chris/development/quest/workspace/my_project/data/tables.py", line 350, in save_project
    result = self.connection.execute(statement, values)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 720, in execute
    return meth(self, multiparams, params)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/sql/elements.py", line 317, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 817, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 947, in _execute_context
    context)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 1111, in _handle_dbapi_exception
    util.reraise(*exc_info)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/util/compat.py", line 168, in reraise
    raise value
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 940, in _execute_context
    context)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/sqlalchemy/dialects/firebird/kinterbasdb.py", line 106, in do_execute
    cursor.execute(statement, parameters or [])
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/fdb/fbcore.py", line 3323, in execute
    self._ps._execute(parameters)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/fdb/fbcore.py", line 2991, in _execute
    self.__Tuple2XSQLDA(self._in_sqlda, parameters)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/fdb/fbcore.py", line 2782, in __Tuple2XSQLDA
    sqlvar.sqlscale)
  File "/Users/chris/.virtualenvs/my_project/lib/python3.3/site-packages/fdb/fbcore.py", line 2266, in _check_integer_range
    if (value < vmin) or (value > vmax):
TypeError: unorderable types: list() < int()
I'm not yet sufficiently familiar with SQLAlchemy's internals to see why this is an issue; the style of my statement is pretty much identical to that in the tutorial. This appears to be an issue with how the parameters are being passed - possibly something about using a dict rather than keyword arguments? But there's nothing in the docs on handling parameters that suggests I have anything amiss here - it seems right from what I'm seeing there.
I've also tried this with self.table.insert().values(values) rather than passing the values term to the execute method, with the same results (as I would expect).
Edit: I note from reading the docstring on execute in fbcore.py that it raises a TypeError when the parameters passed to the method are not given either as a list or a tuple. Is this a change that is not yet reflected in the documentation?
Edit 2: As a comment notes, the stack trace indicates that it's running against the kinterbasdb driver, though I have explicitly configured the engine to run using fdb. This is also confusing to me.
As I might have expected, especially once I discovered that the row was being inserted as expected but then hit with an UPDATE shortly after, the problem was in some related code. I was returning the result as project_id (as you can see in the code above), and for an entirely unrelated reason (having to do with Blinker signals) the method was getting called again with the returned value of project_id, which I had set thus:
project_id = result.inserted_primary_key if result.is_insert else id_
The correct version of this line is only slightly different:
project_id = result.inserted_primary_key[0] if result.is_insert else id_
From the SQLAlchemy docs (emphasis mine):
Return the primary key for the row just inserted.
The return value is a list of scalar values corresponding to the list of primary key columns in the target table.
The return value here has to be a list because a primary key can be a combination of more than one field in the database. (This should have been obvious to me; clearly I haven't done serious database work in over a year.) Since the primary key in this case is a single value, I just take that value and return it, and the problem is resolved.
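A quick illustration of the difference (a sketch, reusing the failing run above in which the sequence returned 14):

result = connection.execute(table.insert(), values)
result.inserted_primary_key     # [14] - a list, one entry per primary-key column
result.inserted_primary_key[0]  # 14   - the scalar id this code actually needs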
Of course, now I have to go hunt down that Blinker signal issue—this method shouldn't be getting called twice—but c'est la vie...
I have been going over the SQLAlchemy documentation, and I am wondering if you should be doing:
if id_ is None:
    statement = self.table.insert()
else:
    statement = self.table.update().where(self.table.c.id == id_)
statement = statement.values(title=title, file_name=file_name, file_location=file_location)
result = self.connection.execute(statement)
That is: instead of passing the dictionary to execute, make the values part of the statement itself (as shown by the Insert Expressions section of the tutorial).

Django's count fails in Oracle

I'm trying to make a query to an Oracle database with this model:
class FCSTrunkValidation(Validation):
    card_transaction = models.CharField(max_length=10, db_column='card_trnsctn_seq', primary_key=True)
    card_number = models.CharField(max_length=16, db_column='card_num')
    use_date = models.CharField(max_length=14, db_column='use_date')
    device = models.ForeignKey('TrunkDevice', db_column='device_id')
    agency_id = models.CharField(max_length=3, db_column='agency_id')
And this query:
# day is a datetime object
qs = FCSTrunkValidation.oracle_objects.all()
qs = qs.filter(use_date__startswith=day.strftime('%Y%m%d'))
When I do qs.count() I get this:
...
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/models/query.py", line 93, in __repr__
    data = list(self[:REPR_OUTPUT_SIZE + 1])
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/models/query.py", line 108, in __len__
    self._result_cache.extend(self._iter)
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/models/query.py", line 317, in iterator
    for row in compiler.results_iter():
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 775, in results_iter
    for rows in self.execute_sql(MULTI):
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 840, in execute_sql
    cursor.execute(sql, params)
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/backends/util.py", line 41, in execute
    return self.cursor.execute(sql, params)
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/backends/oracle/base.py", line 717, in execute
    six.reraise(utils.DatabaseError, utils.DatabaseError(*tuple(e.args)), sys.exc_info()[2])
  File "/home/diegueus9/dev/odm/local/lib/python2.7/site-packages/django/db/backends/oracle/base.py", line 710, in execute
    return self.cursor.execute(query, self._param_generator(params))
DatabaseError: ORA-02395: exceeded call limit on IO usage
However, if I execute the query with the SQuirreL client I get the number, so is Django maybe doing additional queries that raise the error? How can I make the count work with Django?
I thought of using raw SQL, but then I have the problem of escaping the % in the LIKE part.
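If raw SQL does turn out to be necessary, passing the pattern as a bound parameter sidesteps the escaping problem, since only % signs that appear literally in the SQL string need doubling. A sketch (table and column names taken from the generated query shown further below):

from django.db import connection

cursor = connection.cursor()
# The wildcard lives in the bound parameter, not in the SQL string,
# so no escaping is needed; Django converts %s into Oracle-style binds.
cursor.execute(
    'SELECT COUNT(*) FROM "TBAAD300" '
    'WHERE "AGENCY_ID" = %s AND "USE_DATE" LIKE %s',
    ['201', day.strftime('%Y%m%d') + '%'],
)
count = cursor.fetchone()[0]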
Package versions
Python==2.7.4
Django==1.5.4
cx-Oracle==5.1.2
six==1.4.1
Update 20131023
Following the suggestion of @alko, I added a print statement in django.db.backends.oracle.base at line 709, like this:
print query, self._param_generator(params)
Then I executed that query with those params in SQuirreL and still got the count result.
My manual query was:
select count(*) from TBAAD300 where AGENCY_ID=201 and USE_DATE LIKE '20130930%'
The query that Django uses is:
SELECT COUNT(*) FROM "TBAAD300" WHERE ("TBAAD300"."AGENCY_ID" = :arg0 AND "TBAAD300"."USE_DATE" LIKE TRANSLATE(:arg1 USING NCHAR_CS) ESCAPE TRANSLATE('\' USING NCHAR_CS) )
with [u'201', u'20130930%'] as params
Then I executed the same query in SQuirreL and the result is 130410, but when Django runs it, the same error is raised. The full queries printed by the print statement are:
ALTER SESSION SET NLS_TERRITORY = 'AMERICA' []
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS' NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF' TIME_ZONE = 'UTC' []
SELECT 1 FROM DUAL WHERE DUMMY LIKE TRANSLATE(:arg0 USING NCHAR_CS) ESCAPE TRANSLATE('\' USING NCHAR_CS) [u'X']
SELECT COUNT(*) FROM "TBAAD300" WHERE ("TBAAD300"."AGENCY_ID" = :arg0 AND "TBAAD300"."USE_DATE" LIKE TRANSLATE(:arg1 USING NCHAR_CS) ESCAPE TRANSLATE('\' USING NCHAR_CS) ) [u'201', u'20130930%']
Is Oracle your production DB? ORA-02395 means I/O restrictions (https://forums.oracle.com/thread/655458); it seems your count() query leads to a full scan on a large table, in a database with restrictions on block reads per call.
I'd suggest reconsidering your data model: can you use a DateField for the use_date field and replace your query with filter(use_date__gte=..., use_date__lte=...)? In that case you can drastically improve performance and I/O usage by adding an index on the use_date field, since count and similar queries can use it where appropriate.
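A sketch of that suggestion (assuming the column can be converted to a real DATE; model and manager names as in the question):

import datetime
from django.db import models

class FCSTrunkValidation(Validation):
    # ... other fields as above ...
    use_date = models.DateField(db_column='use_date', db_index=True)

# A one-day range filter can then use the index instead of a LIKE
# pattern that forces a full table scan:
qs = FCSTrunkValidation.oracle_objects.filter(
    use_date__gte=day.date(),
    use_date__lt=day.date() + datetime.timedelta(days=1),
)
qs.count()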
UPDATE (added here to format the SQL nicely)
What is your Oracle DBMS version? Are you running the statements under the same user credentials? What are the resource limits for your user (this can help)?
To check the resource consumption metrics, you can also run the following commands in a SQL client:
exec runstats_pkg.rs_start;
<YOUR statement>
exec runstats_pkg.rs_stop;
select * from stats where name like '%session logical reads%';
select * from stats where name like '%consistent gets%';
An execution plan may also be handy:
set autotrace on;
<YOUR STATEMENT>
set autotrace off;
as described here
